A civilization is a type of human community or society that has achieved a comparatively advanced level of culture; it is contrasted with primitive communities lacking this culture. Such a culture includes large-scale political organization and sophisticated expression in a medium such as writing. The term “advanced” implies (perhaps wrongly) that the culture is superior to others.
Arnold Toynbee, the noted British historian, saw civilization as a society which had advanced to a certain level. A “society” provided “common ground” for human interaction; it was a self-contained community. Searching world history, Toynbee found twenty-one societies which had become civilizations: the Egyptiac, Sumeric, Minoan, Hittite, Babylonic, Syriac, Hellenic, Western Christian, Orthodox Christian, Russian Orthodox, Arabic, Iranic, Sinic, Indic, Far Eastern, Japanese Far Eastern, Hindu, Mayan, Andean, Yucatec, and Mexic civilizations. Most of these civilizations are extinct; a few yet survive.
The scheme of civilization in Five Epochs of Civilization departs from the Toynbee model in several respects. First, while Toynbee sees civilizations as geographically distinct regional cultures, McGaughey sees civilization as a stage in a single worldwide culture. Some peoples come to this stage of cultural development earlier or later than others. Second, Toynbee sees civilization as being a society - i.e., a body of people who are politically and culturally organized. McGaughey puts the emphasis on the society’s culture. In this scheme, a civilization is a certain cultural configuration, regardless of where it might be embodied.
Civilizations are most easily understood when particular communities of advanced culture spring up in the midst of primitive societies. For example, the city of Babylon became the center of an empire during the reign of Hammurabi (1792-1750 B.C.). Founded by Amorite tribesmen in the late 3rd millennium B.C., Babylon was one of several successor states to the empire of Sumer and Akkad. King Hammurabi united most of them during a nine-year military campaign. Then, seven years after Hammurabi’s death, Kassite barbarians attacked the city and its empire fell. Even so, the Babylonian “civilization”, in a sense, lasted for several more centuries since the culture developed under Hammurabi remained dominant in the region during that time.
So it is that people feel comfortable talking about an “Egyptian”, or “Hittite”, or “Minoan” civilization whose societies achieved an advanced culture and then fell. Some reverted to a primitive state while others (such as Egypt) merely lost their political integrity. When we come to the modern era, however, it is less clear who belongs to which civilization since cultures and peoples are substantially intermingled. Toynbee supposes that most present-day Americans and Europeans belong to a “western” (or Western Christian) civilization which began at the time of Charlemagne. If so, that civilization is so extended and diffuse in culture as to lack real identity.
The scheme of regional civilizations breaks down at this point. If human communities have substantial contact and awareness of each other, then their “civilizations” must be contaminated with many alien influences. Eventually, the several regional cultures would form a common culture or amalgamation of cultures to varying degrees. It makes more sense, under those circumstances, to talk of a single culture which is worldwide and to differentiate between civilizations on the basis of their degree of cultural progress. Civilizations would then describe how this worldwide society was organized at particular stages in its history. Each stage would loosely be associated with a particular period of years, recognizing that some communities “progress” more rapidly to a new stage than others do.
The scheme of civilizations in the book Five Epochs of Civilization divides world history into five stages, or epochs, after the first civilized states arose in Egypt and Mesopotamia five to six thousand years ago. Each epoch is associated with a particular civilization (conveniently labeled Civilization I, Civilization II, Civilization III, Civilization IV, and Civilization V). Strictly speaking, civilization is a cultural configuration while an epoch is, of course, a period of time. In a practical sense, however, the two terms are synonymous. An epoch changes when the civilization changes.
Because human culture is subject to various interpretations, definitions are needed to introduce some rigor into the discussion. The scheme presented in Five Epochs of Civilization employs two variables in defining civilizations. The first is the communication technology that was dominant in a society’s public discussions. The second is the institution that was dominant in that society. In each case, the dominant element became fully developed at the beginning of its epoch in history.
And so, in tabular form, we have the following scheme of civilizations:
name of civilization  |  communication technology               |  institution of power
Civilization I        |  ideographic writing                    |  imperial government
Civilization II       |  alphabetic writing                     |  world religion
Civilization III      |  printing                               |  commerce and education
Civilization IV       |  electronic recording and broadcasting  |  media of news and entertainment
Civilization V        |  computers                              |  the internet
This scheme of civilization offends certain people. Some believe that since civilizations have traditionally had a regional association, the idea of worldwide civilizations falls outside the scope of that definition and, really, some other word ought to be used. Others are offended by the idea that popular entertainment, coming in the fourth epoch of world history, would seem to be more advanced, historically speaking, than the literate culture that preceded it. Are those young men and women walking around with headphones and wearing sports jerseys and jeans really more “civilized” than the person who relaxes with a good novel?
In response, let us say that no civilization is necessarily better or worse than another; each is merely different. One can recognize the culture of Victorian England - Civilization III - in its emphasis upon literary excellence, education, and commercial enterprise. Today’s popular culture - Civilization IV - is obsessed with the images, sounds, and personalities broadcast to the masses through motion pictures, radio, and television. They are, indeed, two civilizations.
Societies throughout the world experience a transition from one form of culture to another. As much as some would say, for instance, that modern-day China has a “Chinese culture” inspired by Confucius and others, it also belongs to the international culture of rock ‘n roll music, high fashion, and multinational business. Scholars may debate which scheme of civilization is analytically more useful or is closer to known historical facts.
A SHORT HISTORY OF
CIVILIZATION I
Prehistoric Times
The human race may be related to a prehuman species, or “hominid”, that lived 4 million years ago in east Africa. A critical distinction between hominids and other apelike creatures was the ability to stand upright on two feet, freeing the arms and hands for other uses. Man became a creature that uses hand-held tools. Modern man, “Homo sapiens”, was a creature with enlarged cranial capacity, descended from a hominid population that developed in Africa about 100,000 years ago. The last Ice Age, which began 75,000 years ago, initially kept most of the human population confined to warm climates. Certain groups ventured out into the cold, moving north to Europe and east through southern Asia as far as Australia. Most far-ranging were those peoples, ancestors of the American Indians, who crossed a land bridge connecting Siberia with Alaska perhaps as early as 25000 B.C. but more likely around 10000 B.C. The last Ice Age came to an end in the period between 12000 and 10000 B.C. The earth’s human population, then numbering about 4 million persons, was dispersed to six continents.
The earliest human societies consisted of families and tribal groups engaged in hunting, fishing, and other food-gathering activities. Those late Paleolithic peoples roamed land and sea in search of game. Their garments were made of animal skins and fur. They used chipped-stone tools, including arrowheads and scraping devices, and articles made of bone. The Neolithic revolution, which took place with the waning of the Ice Age, brought the arts of farming, spinning and weaving, and pottery manufacture, along with bows and arrows and the use of domesticated animals. Agricultural techniques increased the food supply, allowing humanity to devote time to other arts. Six to seven thousand years ago, copper implements began to be used instead of implements made of stone. Copper and tin mixed together produced bronze, a harder and more durable alloy. Iron smelting was introduced about three thousand years ago. With their food planted in the ground, people began to live in settled communities. Population densities increased.
Historians debate whether agriculture was invented in a single place on earth or in several places. Archeologists excavating a site at Abu Hureyra in northern Syria have discovered that an abrupt change took place there about 9,500 years ago. Digging into the ground, they found that the soil suddenly changed from brown clay to a mass of black material enriched with plant parts, indicating that a farming village had been built on top of an earlier settlement. The evidence suggests that its inhabitants, returning to this area after a recurrence of cold weather, suddenly possessed a knowledge of more than 150 different species of plant life indigenous to scattered places in the Middle East. Some speculate that the rapid development of agricultural knowledge was due to a “communications revolution” brought on by trade in artifacts and materials including obsidian and marine shells. A site excavated at Catalhoyuk in southern Turkey has revealed a settlement with 5,000 to 10,000 inhabitants which was a center of trade in obsidian. Perhaps 9,000 years old, this bull-worshiping community may have been the world’s first city.
The Earliest Civilized Societies
In the period between 4500 and 3500 B.C., urban settlements appeared in an area reclaimed from the swamp in the lower basin of the Tigris and Euphrates rivers. Another group of cities arose along the Nile. Pressed by natural adversity, these Sumerian and Egyptian peoples drained the swamps, constructed ditches, and irrigated fields. Such an enterprise, requiring a collective effort, produced a ruling class to administer the projects. The governing elite ruled by the authority of a local god; their function was to mediate between it and the community. The earliest Sumerian settlements were the cities of Uruk, Ur and Eridu. Uruk, the oldest, may have been founded around 4300 B.C. It grew sixfold in population between 3500 and 3000 B.C. and came to occupy 1,000 acres. In Egypt, the settlements appeared more suddenly, suggesting Sumerian influence. Settlements appeared both in the lower Delta region and in the Nile valley of Upper Egypt, to the south. The urbanization that began in the 4th millennium B.C. brought the first monarchies, a more stratified type of society, specialized occupations, expanded commerce and trade, ideographic writing, legal and accounting systems, walled cities, large-scale wars, and more elaborate burial arrangements.
Western historians have taught that Egypt and Mesopotamia were the cradles of civilization. However, it is possible that a sophisticated type of society appeared at an earlier date in India. Several verses in the Rig Veda refer to the winter solstice as beginning in Aries, which would be consistent with astronomical conditions in the 7th millennium B.C. We do know, anyhow, that a highly developed civilization existed in India prior to the Aryan invasion of the 16th century B.C. Archeological ruins excavated at Harappa and Mohenjo-daro reveal the remains of a technologically advanced culture that existed in the 3rd millennium B.C. Its cities were laid out in a regular grid of streets, with ample provision for water supply, drainage, and public baths. Its people enjoyed a diet of wheat and barley, and wore cotton clothing. Its undeciphered script was perhaps of Dravidian origin. Seals found from this period indicate early Shiva worship. Pre-Aryan Indian society expanded from its place of origin in the Indus and Sarasvati valleys to include territories near the Ganges River. This civilization disappeared in the period between 2000 and 1800 B.C. as the Sarasvati river dried up.
The civilization of China developed around 2000 B.C. on the site of an earlier Neolithic culture. Its relatively sudden appearance suggests contact with other cultures. Rulers of the Xia and, later, Shang dynasties established kingdoms in the basin of the Yellow river where they undertook modest irrigation projects. The first city-state, Erlitou, was founded in 1900 B.C. As in other places, the social classes became sharply differentiated by wealth. Warring kings fought for territory, and the strongest ones gained imperial dominion over territories in northern China. During the Shang period (16th to 11th centuries B.C.), horse-drawn chariots were introduced to the practice of war. A Chinese script was developed; its inscriptions can be found on oracle bones used for divination. Skilled artisans produced bronze vessels in a distinctive three-legged style. Powerful monarchs, accompanied by sacrificed slaves, were buried in lavish tombs. Chinese peasants cultivated rice as well as wheat and millet. They owned pigs and water buffalo. The Shang monarchy was overthrown in 1027 B.C. by the Chou, a vassal state located in the Wei River valley to the west. Continuing the culture of the previous regime, Western Chou kings ruled first from the capital city of Hao (near Xi’an) until 771 B.C. and then, as the Eastern Chou dynasty, from Loyang until 256 B.C.
Roughly contemporary with early Chinese society, the Minoan civilization flourished on Crete and neighboring islands as a satellite of the Sumerian civilization. A forerunner of Greek civilization, it is known for its naturalistic fresco paintings and ceramic art. King Minos, after whom this society is named, built a palace at Knossos about 2000 B.C. The Minoans, rich in copper, conducted an active trade across the Mediterranean sea with the Egyptians, Greeks, and Lebanese until their civilization was destroyed through a series of natural and man-made disasters around 1200 B.C. Another center of commerce was the Persian Gulf, where trade in grain, oil, copper, textiles, precious metals, and pearls flowed between cities in Mesopotamia, northeast Arabia, and western India during the early 2nd millennium B.C. This trade ended with the demise of the Harappan society in India. The Elamite empire, situated to the north of the Persian Gulf in Iran, was a political and commercial power until it was destroyed by the Assyrians in 640 B.C. The Hittites, in present-day Turkey and northern Syria, were an Indo-European people who established a great empire in the 14th and 13th centuries B.C. They were the first to use weapons made of iron. The Minaean and Sabaean kingdoms of southern Arabia were also important civilizations starting in the late 2nd millennium B.C.
Civilization, defined here in terms of the tendency to build political and commercial empires, came to the other continents later than this. In sub-Saharan Africa, a prosperous trading empire arose in Ghana in the 3rd or 4th century A.D. which lasted for more than a thousand years. This was followed, in the 13th and 14th centuries, by the empire of Mali in the bend region of the Niger river; and by the Songhai empire, in the next two centuries. A Nubian dynasty ruled Egypt in the 8th and 7th centuries B.C. until pushed back into the Sudan by the Assyrians. There, from its capital city of Meroë, the empire of Kush controlled lands watered by the Upper Nile river until the 4th century A.D., when it was conquered by the Ethiopian kingdom of Axum. The art of iron smelting had given Kush a military advantage over other African peoples.
In the Americas, a large ceremonial center was constructed at La Florida in Peru around 1700 B.C. This society had already begun to use irrigation and terraced farming. The Andean culture produced excellent textiles, pottery, and metal tools. The Olmec civilization arose independently in southeast Mexico about this time. This society is known for creating huge stone heads and pottery with jaguar motifs. It developed the first Meso-American script. Both New World societies produced new varieties of foods, tobacco, and other useful plants.
The First Mideastern Empires
World history in its first epoch follows the course of progress toward larger forms of political organization. A momentous event was King Narmer’s conquest of Lower Egypt (near the Nile delta) around 3100 B.C. Then king of Upper Egypt, Narmer became the first Pharaoh, wearer of a “double crown”. When we think of Pharaonic Egypt, we envision the massive stone monuments which these people left behind, such as the great pyramids or the Temple of Luxor. The pyramids at Gizah were tombs of Pharaohs belonging to the Fourth Dynasty (2613-2495 B.C.). The mummified bodies of these great kings and their attendants were adorned with jewels and were provisioned with food to prepare for eternal life. The pyramids, like the ziggurat temples in Mesopotamia, were artificial mountains whose steps extended towards heaven. They became symbolic of solar rays used by the dead kings to join the sun-god Re. Once considered to be living gods, the Pharaohs later claimed the title “son of Re”: each was held to be a god begotten by Re on a human mother in a nonphysical act. With some interruptions, pharaonic dynasties gave Egypt political unity and stability for three millennia. Pepi II, who ruled for 94 years, was the last Pharaoh of the Old Kingdom. He died in 2184 B.C.
The rites required for each departed king spawned a new contingent of priests who were a drag on the economy. Local princes, who had once been Pharaoh’s officers, gained a hereditary right to their positions. They took control of the native Egyptian army and were able to thwart Pharaoh’s attempt to regain power with Nubian mercenaries. The local princes reigned until the reestablishment of central government in the Middle Kingdom (1991-1786 B.C.). This pharaonic dynasty, which moved its capital to Thebes, did not revive the burdensome funeral practices of its predecessors. Its rulers built fortresses, not pyramids. However, they were unable to withstand the Hyksos invasion from Syria. Hyksos nomads ruled northern Egypt until 1567 B.C., when Ahmose I reunified the country and established the New Kingdom (1575-1087 B.C.). Egypt became a military power, seeking to control threats from Asia Minor. Pharaohs of the New Kingdom include Ramses II (believed to be the Pharaoh in Exodus), “King Tut” (Tutankhamen), and the religious visionary Ikhnaton. In the 1st millennium B.C., a Libyan regime ruled Egypt, followed by Nubians, Persians and Greeks. Cleopatra VII was the last ruler of the Greek Ptolemaic dynasty. With her death by suicide in 30 B.C., Egypt became a Roman province.
Political unification came more slowly in Mesopotamia. King Urukagina of Lagash (2378-2371 B.C.) conquered the neighboring city of Umma and established the first empire of Sumerian city-states. He was overthrown by another king, Lugalzaggisi, who annexed new territories to the north and west. Lugalzaggisi’s empire was, in turn, conquered by a Semitic-speaking king, Sargon of Agade (2371-2316 B.C.). Sargon’s Kingdom of Sumer and Akkad included most of the territory between the Persian Gulf and the Mediterranean Sea. This dynasty lasted until 2230 B.C., when Gutaean highlanders, infiltrating from the northeast, took control of the empire. Amorite tribes founded the city of Babylon during this period. Gutaean rule (2230-2120 B.C.) came to an end at the hand of a native Sumerian, Utuhegal of Uruk. The king of Ur then seized power and established a dynasty which lasted until 2006 B.C. Next, Elamite subjects revolted, sacking the city of Ur. The empire was partitioned among several successor states, including Elam, Isin, Mari, Babylon, and Assyria. King Hammurabi of Babylon (1792-1750 B.C.) reunited most of these provinces in a nine-year military campaign. Though rich in cultural achievements, this Babylonian empire barely outlived Hammurabi’s death. Then, Kassite barbarians attacked Babylon and the empire again splintered.
The Middle East was plagued by frequent wars during the millennium that followed the fall of Hammurabi’s empire in 1743 B.C. When the Hittite king Mursilis I sacked Babylon in 1595 B.C., the Kassites seized the opportunity to take control of the city. Their revived Sumerian empire lasted until 1169 B.C. After expelling the Hyksos kings from Egypt in the mid 16th century B.C., Pharaohs of the New Kingdom conquered lands in Syria and Palestine to forestall future invasions from that region. The Hittites became militarily aggressive in the 14th century B.C. By 1300 B.C., their empire was as powerful as Egypt’s. The two military powers fought for control of Syria. The Hittites defeated Egypt at the battle of Kadesh (1286-85 B.C.) but later reached a peace settlement with the Egyptians partitioning Syria. This may have been the first time in history that two civilized empires went to war against each other. Meanwhile, Assyria was attacking settlements in Babylonia. To the west, Mycenaean Greeks destroyed Minoan palaces on Crete. Though its political structure was weak, the Babylonian civilization which existed during this period was culturally strong. Its mythology, science, and written language permeated the Near East. Even Pharaohs used the Akkadian language when communicating with their Asian subjects.
Nomadic Invasions
A prominent theme of world history in the first epoch was recurring conflict between peoples living in civilized societies and barbarian nomads who preyed on their wealth. The nomads were remnants of preagricultural society who hunted for their food or tended herds of grazing animals. A thousand years of breeding had given them a new military weapon in the form of horses large and strong enough to support human riders. Used to disciplined migrations from one pasture ground to another, these nomads from the steppe were skilled at waging mobile war. Like a vibrating membrane stretched across the Eurasian continent, their raids and migrations from the unsettled interior touched scattered societies in China, India, Egypt, and the Middle East. In periodic incursions, the barbarians would encroach upon lands belonging to the settled peoples, sack the cities, pillage and steal. After attacking and defeating the civilized societies, the barbarian tribes sometimes settled down among them as a ruling class. It was customary then for the conquering barbarians to absorb the conquered people’s culture. If, on the other hand, the civilized society was strong enough militarily, it repelled the invasion.
The barbarian aggression came in waves. There was, for instance, a time of nomadic restlessness in the first half of the 2nd millennium B.C. when Hyksos warriors from Canaan invaded Egypt, Mitanni tribes occupied Mesopotamia, Hittites and Kassites attacked Babylon, and unknown barbarians destroyed the ancient Minoan palaces on Crete. Sanskrit-speaking Aryans invaded northern India, overthrowing the earlier Dravidian society and establishing a caste system. The classical Vedic literature dates from this period. Another wave of barbarian invasion came between 1250 and 950 B.C., as diverse peoples migrating into the eastern Mediterranean region destroyed the Minoan and Hittite societies and put the Egyptian empire under stress. The attack upon Egypt came from Berbers and Libyans to the west, “sea peoples” from the northeast, and Amorites, Philistines, and, perhaps, Israelites. Achaean and Dorian tribes meanwhile attacked the Mycenaean settlements in Greece. The Hittite empire was overrun by Thracians, Phrygians, and Assyrians. There was a third wave in the 8th and 7th centuries B.C., when Cimmerian nomads moved westward and camel-riding Arabs attacked the Assyrian empire. In the 4th and 3rd centuries B.C., Celtic tribes migrating from northwestern Europe invaded Italy, Greece, and Romania, briefly occupying the city of Rome.
Militarism in the Middle East
As the settled peoples grew strong enough to withstand these nomadic pressures, the historical focus shifted to military competition between nations. Civilized nations such as the Phoenicians, Chaldaeans, Hebrews, and Greeks were formed from the hordes of people migrating into the east Mediterranean region towards the close of the 2nd millennium B.C. The kingdom of Assyria emerged from the rubble to become the dominant Near Eastern power. With Egypt weakened and the Hittite empire ruined, Assyrian armies in the course of three centuries conquered Aramaean cities in Syria, besieged the kingdom of Urartu (Armenia), destroyed the city of Babylon, and set a puppet ruler on the throne of Egypt. Assyrian treatment of conquered peoples was cruel. A rebellion broke out in Babylonia. Babylonians, Medes, and Persians joined forces against Assyria, taking its capital city, Nineveh, in 612 B.C. The Babylonians under King Nebuchadnezzar were briefly the strongest power; then, the Medes. But, before long, a new empire had gained control of the whole region. Cyrus II, king of Persia, supplanted the king of Media in 550 B.C. He then conquered the kingdom of Lydia and, in 538 B.C., the neo-Babylonian empire. Cyrus’ son, Cambyses, conquered Egypt in 525 B.C.
The Achaemenian empire of Persia was the largest, richest, and most powerful political empire seen to date. While adopting the religion of Zoroaster, its rulers followed a policy of religious tolerance towards subject peoples including the Hebrews. Darius I (521-486 B.C.) seized the throne through the assassination of Cyrus’ second successor, Smerdis. Darius divided the empire into twenty satrapies which were responsible for local administration. He added Thrace and northwestern India to its territories and dug a canal between the Nile river and the Red Sea. A system of well-maintained roads connected cities within the empire. Xerxes’ decision to invade European Greece in 480 B.C. proved a mistake. A coalition of Greek city-states, led by Athens, repelled that invasion. A century and a half later, Macedonian and Greek armies under Alexander the Great, in turn, invaded Persian territory in Asia. Alexander’s army defeated the Persian forces under Darius III at the battle of Issus in 333 B.C. For the next decade, Alexander engaged the Persians and other foes in a series of victorious battles, conquering not only Persian provinces in Iran and Babylonia, but in Syria, Egypt, Afghanistan, and parts of northern India as well.
Alexander’s generals established royal dynasties in these various domains after their leader’s untimely death in 323 B.C. City-states in southern Greece promptly revolted against Macedonian rule but were suppressed. Then the Macedonian generals fought among themselves. Macedonia had to contend both with opposition from the Greek Aetolian Confederation and Celtic migrants from the north before succumbing to the Romans in the 2nd century B.C. Seleucus I acquired most of Alexander’s far-flung territories in Asia. His troops were soon expelled from the Indus Valley by Chandragupta, founder of the Mauryan empire. In the mid 3rd century B.C., the Seleucid empire was further reduced by the Parni barbarian occupation of Parthia and the secession of a Greek province in Uzbekistan. Another of Alexander’s officers, Ptolemy I, founded a dynasty in Egypt and the southern half of Syria. This was perhaps the strongest of the Hellenic dynasties. Ptolemy’s capital at the new city of Alexandria became a center of learning and trade. Seleucid emperors tried repeatedly but failed to wrest southern Syria from Egyptian rule. Conflicts between the southern Greek states and Macedon were equally inconclusive.
Though the successor states to Alexander’s empire were weakened by continual warfare, they effectively spread Greek culture within their vast territories. Starting with Philippi in eastern Macedonia, Alexander and his father, Philip II, together founded more than 300 new cities. These cities were self-contained carriers of Greek culture. Typically, each had its own agora (market), theater, and gymnasium, which were public gathering places. The gymnasium housed intellectual as well as physical activities. Greek culture in the form of visual images, philosophy, and written language became associated with the social elite in each community. The common people tended to stick with their local traditions. Tensions between the hellenizers and local religious traditionalists underlay the Maccabean rebellion in Judaea. While Seleucus, like Alexander, encouraged mixing between the Greek and local peoples, the Ptolemaic regime kept important government posts in Greek hands. Social integration went furthest in the eastern part of Seleucus’ empire which became the Bactrian kingdom. Athens remained the center of philosophy and drama. In addition to the Aristotelian and Platonic schools of philosophy, the Epicureans and Stoics attracted a broad following in the Hellenic world.
While Greek dynasties controlled the domain of Alexander’s conquests, the center of geopolitical gravity meanwhile shifted to the western Mediterranean region. The Greeks had established colonies in Sicily and southern Italy during the 8th and 7th centuries B.C. The Phoenician colony of Carthage in north Africa attacked Greek settlements in Sicily in 480 B.C., but was defeated by an alliance led by Syracuse and Agrigentum. Syracuse made a bid to unify Italy during the reign of Dionysius I (405-367 B.C.). This failed because of conflict with Carthage and other Greek states. Dionysius II invited Plato to Syracuse to apply his political theories. However, in 344 B.C. Timoleon of Corinth overthrew Dionysius II and then forged an alliance between the Greek cities which expelled the Carthaginians from Sicily. After that, Greek fortunes in Italy went into a decline despite military assistance from mainland Greece. The Etruscans, a hellenized remnant of the extinct Hittite civilization, were a rising power in northern Italy during the 7th and 6th centuries B.C. Tarquin kings ruled the city of Rome for more than a century. The Etruscan bid to conquer Italy failed because they, too, were unable to maintain an effective alliance of city-states. Additionally, Celtic tribes from beyond the Alps overran their domain.
Rome’s Emergence as a World Power
Once freed of Etruscan rule, Rome concluded peace treaties with Carthage which helped it to wage successful wars of aggression against its neighbors. Roman power doubled through the capture of the Etruscan city of Veii and its territories in 393-88 B.C. A war against the Samnite confederation between 343 and 272 B.C. and defeat of the Latin and Campanian federations in 335 B.C. brought more land under its control. By 264 B.C., Rome had unified peninsular Italy. In the process, the Roman government broke treaties with several states including Carthage. A war between Rome and Carthage between 264 and 241 B.C. brought much destruction but left Italy and most of Sicily in Roman hands. Rome now enjoyed naval supremacy in the western Mediterranean region. A second set of conflicts broke out in 221 B.C. From a base of operations in Spain, Carthaginian armies led by Hannibal marched with elephants across the Pyrenees and Alps mountains into the Po Valley of northern Italy. Hannibal’s armies thrice defeated their Roman counterparts in brilliantly executed battles. In the end, however, the Romans successfully defended against the Carthaginian invaders. Roman armies under Publius Cornelius Scipio counterattacked in Spain and, in 202 B.C., defeated Hannibal at the battle of Zama, near Carthage itself.
Rome controlled most lands bordering the western Mediterranean sea at the beginning of the 2nd century B.C. By the end of that century, the eastern shores would be added to its territories. A war broke out between the Greek Aetolian League and an alliance headed by Macedon in 220 B.C. Macedon became allied with Carthage, and Aetolia sided with Rome. After Hannibal’s defeat, Rome, with Aetolia’s help, defeated the Macedonians at Cynoscephalae in 197 B.C. and stripped Macedon of her possessions in southern Greece and Asia Minor. Rome inflicted a similar defeat upon Sparta in the same year. In 192 B.C., Aetolia and the Seleucid empire together went to war against Rome. It took the Romans two years to defeat the Seleucid emperor Antiochus III at Magnesia-under-Sipylus, and three years to defeat Aetolia. The Seleucid empire was forced to cede land to Rome and pay a heavy indemnity. The Aetolian League was effectively finished. Finally, Rome liquidated the kingdom of Macedonia in a hard-fought war lasting from 171 to 168 B.C. Alexander’s homeland became a Roman province. Rome’s reputation as a military power was now so great that a single warning from a Roman diplomat caused the Seleucid emperor Antiochus IV to vacate Egypt in 168 B.C.
Rome’s object then was to cut down potential military rivals; it made no attempt to build its own empire for another hundred years. Rome had won the struggle for geopolitical advantage through a combination of shrewd diplomacy and military might. To gain control of Italian territory, it would make alliances to ensure peace with a powerful adversary while picking off smaller states one by one. Then Rome would jilt the ally when its services were no longer needed. Additionally, its constitution proved attractive to peoples living in politically backward states. The system of dual citizenship helped to harmonize local and imperial interests. Customarily, the Roman oligarchs supported their wealthy counterparts in other states. This assured them of support from a powerful fifth column in enemy nations. Roman society was itself split sharply between the rich and poor classes. The rich acquired their wealth as landowners, tax-farmers, speculators, and government creditors. Increasingly, small farmers joined the ranks of the poor as they were pressed into military service and their neglected farms were picked up by wealthy speculators. To address this injustice, Tiberius Gracchus in 133 B.C. proposed and enacted a law that would limit the size of land holdings. Aristocrats in the Roman Senate assassinated him.
The continuation of plutocratic government led to an economy based on slave labor and private armies composed of recruits from the poorer classes. The slaves, taken captive in war, were put to work on large plantations that raised cattle and sheep or cultivated olives and grapes. Slave uprisings took place in Sicily, Greece, and on the island of Delos during the late 2nd century B.C. A slave army led by a gladiator named Spartacus overran much of the Italian countryside between 73 and 71 B.C. While serving as Consul, the Roman general Caius Marius raised an army of paupers with the understanding that he would look after their interests in exchange for military service. Thus began a period of rule by revolutionary warlords lasting from 108 to 30 B.C. A triumvirate which consisted of Pompey the Great, Julius Caesar, and Marcus Crassus took control of the Roman government in 60 B.C. Pompey completed the Roman conquest of Armenia, Syria, and Judaea before losing out to Caesar in a power struggle. Crassus was killed in a battle with the Parthians in 53 B.C. Julius Caesar extended Roman rule to lands north of the Alps in a successful military campaign. He then became sole dictator in Rome and acted in that capacity for two years before being assassinated by a group of conspirators in 44 B.C.
A new triumvirate emerged consisting of Mark Antony, Lepidus, and Caesar’s adopted grand-nephew Octavian. Octavian defeated Antony and Cleopatra, Egypt’s last queen, in a naval battle at Actium in 31 B.C. He located Caesarion, the teenage son of Cleopatra and Julius Caesar, and had him killed. Now possessing absolute power, Octavian became the first Roman emperor. Octavian, or Augustus Caesar, developed a new form of Roman government based upon a deified state, a professional army, and civil servants. Augustus reformed the system of private armies by employing career soldiers. He replaced private tax collectors and administrators with a “Caesar’s household” of slaves and freedmen to serve as his personal staff. Modestly limiting his own title to “princeps of the Senate”, Augustus nevertheless tolerated the cult of emperor worship which his adoptive father had begun. He developed a three-step process by which individuals from defeated nations might become Roman citizens. The Roman Senate, a relic of republican government, was keeper of Rome’s traditional ways, but the real power belonged to emperors enjoying the support of the armed forces.
Roman governments had traditionally been reluctant to assume direct responsibility for governing conquered lands. Augustus and his successors organized the Roman empire as an association of autonomous city-states that were restrained from going to war against each other. The central government in Rome provided for their common defense against external enemies. Forsaking further conquests, Augustus sought to establish defensible borders for the empire. His attempt to extend its territory to the Elbe river came to grief when Germanic tribes annihilated three Roman armies at the Teutoburg forest in 9 A.D. The border fell back to the Danube river. Rome’s depopulation which had begun in the 1st century B.C. now limited military options. Between 114 and 117 A.D., the emperor Trajan tried to conquer Armenia, Babylonia, and Mesopotamia from the Parthians. Those expeditions ended in disaster. Trajan’s successor, Hadrian, pulled back to the Euphrates river. Military tension continued along the southeastern border after the Parthian king was overthrown by his vassal, Ardeshir I, who founded the Sasanian dynasty. Ardeshir’s son Shahpuhr I thrice defeated Roman armies, capturing the emperor Valerian in 260 A.D. However, a counterattack delivered by Rome’s ally, the prince of Palmyra, drove the Persians back. The Romano-Persian wars of 337-60 A.D. were likewise inconclusive.
The Julian dynasty of emperors came to an end in 68 A.D. with the death of Nero. After three decades of military rulers, Rome was governed by five “wise and temperate” emperors whose combined reign lasted until 180 A.D. The last, Marcus Aurelius, is remembered for a book of philosophical Meditations. The reign of his son, Commodus, marked the beginning of a series of despotic emperors and military usurpers who were generally hostile to Christianity. Many lasted for only a year or two. The exceptions were Diocletian (284-305 A.D.) and Constantine I (306-337 A.D.). These two able emperors created a mobile army to deal with insurrections and invasions, restored the debased Roman currency, resurveyed the land and adjusted taxes. Constantine I split his vast empire into two administrative districts. He established a new capital city for the eastern half in 330 A.D. It was called Constantinople. The period between 250 and 311 A.D. had seen an intense effort by Roman emperors to suppress Christianity. However, Galerius rescinded an anti-Christian edict on his death bed in 311 A.D. Two years later, Constantine adopted a policy of religious tolerance. Alaric, king of the Visigoths, sacked Rome in 410 A.D. The last western emperor, Romulus Augustulus, was deposed in 476 A.D.
Hunnish and Scandinavian Eruptions
The western Roman Empire, headquartered in Rome, showed signs of internal weakness in the late 4th century A.D. Big landowners diverted the peasants’ agricultural surplus from the tax collector to themselves. The government became a dictatorship under the control of the military high command. After the Visigoths defeated the Roman armies at Adrianople in 378 A.D., Rome’s European frontier was severely exposed. The Visigoths had been driven into Roman territory by advancing hordes of Alanic and Ostrogothic tribes who had, in turn, been dislodged from their east European homelands by Huns moving west. East Germanic tribes broke through the Roman defenses on the Rhine river around 406 A.D. The Vandals from Jutland traveled across southern Europe to Spain and, in 429 A.D., crossed over to north Africa where they established a maritime empire. Ostrogoths and Lombards spread havoc in Italy. Angles, Saxons, and Jutes occupied Britain. With the West Roman Empire ravaged by other barbarian tribes, Attila and his hordes invaded northern Italy in 452 A.D. Reportedly, Pope Leo I persuaded him not to attack Rome. Attila withdrew from Italy and died a year later.
Chinese society had felt Hun pressure a full century before Rome fell. In 316 A.D., an attacking horde of Hsiung-nu (Hun) barbarians overthrew the Western Chin dynasty and partitioned northern China between several successor states. A half century later, another Hun tribe migrated into the territory between the Don and Volga rivers, dislodging the Visigoths and setting in motion the migration of Germanic tribes across Europe. A remnant of Attila’s horde settled down in western Hungary. Ephthalite Huns defeated and killed the Sasanian emperor Peroz in 484 A.D., forcing the Persians to pay tribute to them for almost a century. Then an alliance between Persians and Turks overthrew the Ephthalite empire and partitioned its territories. White Huns attacked and shattered the Indian Gupta empire in 455 A.D. Most of this empire fell within ten years, although a remnant of the Gupta dynasty continued in Bengal until 544 A.D. Descendants of the Hunnish warriors, converted to Hinduism, survived in the Rajput aristocracy which dominates the state of Rajasthan in northwestern India.
The Frankish king Clovis began to build a Gallic empire in the late 5th century A.D. He conquered the Alamanni tribe along the Rhine in 496 A.D., defeated the Spanish Visigoths in 507 A.D., and by the time of his death in 511 A.D. controlled all of Gaul except Provence. His successors annexed Thuringia and Burgundy. Clovis and his heirs embraced Roman Catholic Christianity, unlike most other Germanic kings, who had converted to the Arian faith. After acquiring a huge domain, the Merovingian dynasty of Clovis became internally weakened because of its practice of dividing territory among several heirs upon a monarch’s death. The Arnulfing family, majordomos in the Merovingian household, effectively ran the government. One of its members, Pippin III, requested that Pope Zacharias recognize his family’s claim to the throne. Upon obtaining a favorable response to this request, Pippin deposed the Merovingian king and began his own Carolingian dynasty. When the Lombards captured Ravenna in northern Italy and threatened to take Rome, Pope Stephen II requested Frankish aid. Pippin sent troops to Italy and defeated the Lombards in 756 A.D.
Pippin’s son, Charles, became sole ruler of the Franks in 771 A.D. when his brother, who was co-ruler, unexpectedly died. Charles, today known as Charlemagne, annexed the Lombard kingdom in Italy in 773-74 A.D. He exterminated the Avars in Hungary between 791 and 805 A.D. A more difficult military task was the conquest of Saxony between 772 and 802 A.D. This brought Charlemagne’s empire into direct contact with the Danes, who responded by launching naval raids upon its territories. The empire now encompassed most of present-day Germany, France, Italy, and the Benelux countries. For all practical purposes, Charlemagne had revived the West Roman Empire. In recognition of that fact, Pope Leo III crowned him Emperor of the Romans on Christmas Day 800 A.D. Because the East Roman emperors retained the right to that title, Charlemagne made certain territorial concessions to Byzantium to obtain its recognition and consent. His empire lacked a corps of literate administrators, so Charlemagne brought in the Northumbrian cleric, Alcuin, and others to establish palace and cathedral schools. Itinerant inspectors kept a close watch on local officials. After Charlemagne’s death in 814 A.D., these local officials assumed power for themselves. Charlemagne’s heir, Louis the Pious, divided the empire among his three sons. Problems grew worse with raids by Viking and North African pirates.
The Viking eruption of the 9th and 10th centuries A.D. was a consequence of Charlemagne’s conquest of Saxon territory, which had been a buffer zone between the Scandinavian barbarians and Roman civilization. Its first casualties were Christian monasteries along the coast of Britain, Ireland, and France. In 880 A.D., Rhos adventurers from Sweden on the Baltic Sea navigated the inland waterways of Russia to reach the Caspian Sea. Another group of Scandinavians sailed across the North Atlantic ocean to Iceland, Greenland, and Vinland in North America. Despite the Scandinavians’ ferocity, Charles the Simple, king of West Francia, invited these seafarers in 911 A.D. to settle the coast of Normandy on condition that they convert to Christianity. King Alfred of England had made a similar proposition to Danish invaders in 878 A.D. The Scandinavian immigrants proved receptive to the Frankish culture and religion. The seeds of several future European nations were sown in this period. The modern nation of France took shape as the Counts of Paris successfully defended against Scandinavian attackers between 885 and 887 A.D. Rhos Swedes settling at the cities of Novgorod and Kiev gave Russia its name. In 1066 A.D., descendants of the Norman settlers successfully invaded England. English dynastic history usually begins with this event.
Continuation of the Roman Empire in the East
The East Roman empire, headquartered in Constantinople, did not fall when Germanic barbarians overran the western territories controlled by Rome. This government was staffed with professionals loyal to the state rather than big landowners. It had a citizen rather than mercenary army. The East Roman emperors of the 5th century A.D. were able administrators and legal reformers who kept state finances under control. They built a wall around their capital city and made strategic concessions to the barbarian invaders that allowed them to survive. Among the Byzantine emperors, the best-known may be Justinian I (527-565 A.D.) who built the church of Hagia Sophia and codified Roman law. This emperor also reconquered northwest Africa from the Vandals, drove the Ostrogoths from Italy and Illyricum (Dalmatia), and restored Roman naval superiority in the Mediterranean sea. However, the 26-year campaign against the Ostrogoths drained the imperial treasury. Ruinous taxes were imposed upon the Levantine provinces. Lombard tribes invaded Italy seven years after the Ostrogoths had been expelled. During the war of 572-91 A.D. against the Sasanian Persian empire, Slavs and Avars entered the Balkan provinces unopposed. Though later expelled, the Slavs returned during the Romano-Persian war of 604-28 A.D. This time, they stayed.
In 633 A.D., Islamic armies led by Mohammed’s successor Abu Bakr attacked both the East Roman and Sasanian empires, exhausted from their recent war. The Persian empire was destroyed. The East Roman empire survived with great loss of territory. The Arabs laid siege to Constantinople in 674-78 A.D. and again in 717-18 A.D. but were unable to penetrate its walls. Another military threat came from the Slavic settlers in the Balkan peninsula after Turkish-speaking Bulgars occupying land between the Danube river and Black Sea had founded a rival state. The Romans and Bulgarians competed for the allegiance of Slavic peoples in that region. Constantine V was unable to destroy the Bulgarian state in a twenty-year war but the Byzantine empire did later subjugate most Slavs living on the Greek peninsula. Another lengthy war was waged against Paulician Christians in the northeast. The result of the frequent warfare was depopulation. This worked to the advantage of the peasants who comprised the militia defending the empire against Arab raids. After the loss of Sicily to the Moslems, the East Roman government needed to prevent its Sicilian and Bulgarian opponents from making contact. However, it made few attempts to recover its former possessions in the Mediterranean other than Crete.
Through that turmoil, the East Roman empire developed a distinctive culture that combined Greek and Slavic elements. Its society retained the religion shared with west Europeans but reverted to exclusive use of the Greek language. In the 4th century A.D., the Cappadocian Christian fathers, St. Basil, St. Gregory of Nyssa, and St. Gregory Nazianzene, produced a body of neo-Attic Greek literature which became a model for future writings. A Syrian Jewish convert to Christianity, Romanus the Composer, was instrumental in creating the Byzantine style of music and liturgical poetry. A cultural renaissance took place during the 9th century administration of Photius, Patriarch of Constantinople, who sent scholar-missionaries to Khazar Turks in the Ukraine. They brought with them the Glagolitic alphabet which had been developed for Slavic peoples in Greece. The Khazars were committed to Judaism, so the Byzantine missionaries moved on to the Slavic principality of Great Moravia (Czechoslovakia). When the Frankish church cracked down on this mission, refugee clergy next went to Bulgaria. Here a new script, Cyrillic, was developed as a simpler alternative to the Glagolitic alphabet. This script was used by subsequent Slavic converts to Orthodox Christianity.
As the first millennium A.D. drew to a close, the Byzantine Greek culture and religion had spread northward into Russia, despite its earlier settlement by Swedes. Prince Vladimir of Kiev was baptized in this faith in 989 A.D. Vladimir then married Emperor Basil II’s sister Anna. His religious conversion brought Greek art and liturgy into Russia, along with the Cyrillic alphabet. Meanwhile, high taxes combined with crop failure in the severely cold winter of 927-28 A.D. forced many peasants to sell their land to big landowners. Landlord-aristocrats in Asia Minor, supported by the peasants, engaged in five insurrections against the imperial government between 963 and 1057 A.D. The government launched military offensives against Moslem bases in Sicily and Crete and, with the help of mercenary soldiers, finally conquered Bulgaria. However, this 40-year war was financially ruinous. The peasant militia which had served the empire so well in defensive actions was not motivated to fight for imperial expansion. Not long after the Roman government reoccupied Syracuse, adventurers from Normandy captured key positions in southern Italy. Saljuq Turks who had been menacing Armenia took Emperor Romanus IV prisoner in 1071 and soon controlled most of the empire’s former territories in Asia Minor.
The East Roman empire was now under attack from Norman Christians as well as Ghuzz barbarians and Saljuq Turks of the Rum kingdom. The First Crusade (1095-99) brought western Christian armies to Constantinople. Emperor Alexis I tried to enlist their help in ousting the Turks but the European princes were mainly interested in capturing Jerusalem. A Latin Kingdom of Jerusalem was established in 1099. However, Saladin took the city back less than a century later. The Third Crusade (1189-92), undertaken in response to that event, failed to retake the city from the Moslems. After Western businessmen were massacred in Constantinople, the Normans retaliated by sacking Thessalonica. Serbia and Bulgaria threw off Byzantine rule. The Fourth Crusade (1202-04) was diverted from its original purpose into a scheme to overthrow the imperial dynasty. An army of Venetians and French crusaders assaulted, captured, and looted Constantinople in 1204. The Venetians took valuable land possessions, while a Frenchman, Baldwin I, became Emperor of Constantinople. Seceding Greek city-states in Asia Minor then set up their own empire at Nicaea. The Nicaean Greeks and Bulgarians together laid siege to Constantinople, which fell in 1261.
The Nicaean Greeks regained Constantinople at the cost of losing most of their Asian lands to the Ottoman Turks. Serbia was also becoming a major power in Europe. If that were not enough, a civil war broke out within the East Roman empire between 1341 and 1347 reflecting both theological disagreements and conflict between large and small landowners. The empire was doomed. Emperor Michael VIII Palaeologus realized that he needed the support of western Christians. He and several successors recognized the Pope’s ecclesiastical authority. An Act of Union with the western church was signed by the Emperor and top religious officials at Florence in 1439. However, the mass of Eastern Orthodox clergy and laity rejected this agreement. Most Greeks preferred Ottoman rule to domination by western Christians. The Russian Orthodox hierarchy repudiated the Metropolitan who had signed the agreement in Florence and replaced him with a Russian native. The Turks meanwhile tightened their land blockade. The East Roman empire came to an end when the Ottoman Turks captured Constantinople in 1453 A.D. The Turkish rulers gave the Patriarch of Constantinople political authority over the non-Moslem communities. Greeks played a leading role in the political and commercial life of the ensuing Ottoman empire.
Parthian, Kushan, and Sasanian Empires
When Roman power was at its peak in the 2nd century A.D., four contiguous political empires controlled much of the Old World. Besides Rome, there were the Parthian empire in Persia, the Kushan empire in Afghanistan and northwestern India, and the eastern Han empire in China. These four empires extended across north Africa and Europe through southern Asia to the Far East. To their north was a wilderness extending from Scandinavia and Germany to Mongolia and Siberia; to their south, the Saharan and Arabian deserts, southern and eastern India, southeast Asia, and Indonesia. Rome and China, at the extremities, were dimly aware of each other’s existence. The Parthian and Kushan empires, occupying a middle position, were in direct contact with the others. Their merchants acted as middlemen for overland trade. This was the culmination of CivI. Each of the four empires was ruled by hereditary monarchs exercising nearly absolute power. Each empire, representing a consolidation of political and military power among warring kingdoms, brought peace and stability to its region. This situation came to an end with the Hun uprisings, which overthrew the Eastern Han dynasty of China in the 3rd century A.D. and destroyed the West Roman empire two centuries later.
Of the four empires, only the Chinese had not been touched by Alexander’s conquests in the 4th century B.C. The Roman, Parthian, and Kushan empires were heirs of Greek civilization. All three fell at least partially within the vast territory which Seleucus had inherited from Alexander but could not retain. Long afterwards, Greek culture continued to have a strong influence in those places. Mixed with local traditions, it became an element in the syncretizing process of creating world religions. Bactria (northern Afghanistan) went farthest with the hellenizing process. The Kushan empire, located there, became a cultural cauldron in which Greek philosophy and visual art transformed Buddhism into a religion of personal images. The realism of Praxiteles was applied to images of the divine. It was this Buddhism, in the Mahayana form, which penetrated China beginning in the 2nd century A.D.
Just as Rome conquered the western lands held by the Greek dynasties, so nomadic invasions carved the Parthian and Kushan empires out of the eastern part of the Seleucid empire. Parni nomads from Türkmenistan, led by Arsaces, freed themselves from Seleucid rule around 250 B.C. and established the Parthian kingdom in northeastern Iran. Kin to the Scythians, they were horsemen and archers of great ability. In 141 B.C., the Parthians under Mithridates I conquered Media and Babylonia from the Greeks. They took the Seleucid emperor Demetrius II prisoner when he tried to regain the lost territory. The Arsacid dynasty moved its capital to Ctesiphon, a suburb of Seleucia-on-Tigris. The Romans fared no better against Parthian arrows. An army led by Marcus Crassus was annihilated when it invaded Mesopotamia in 53 B.C. Trajan’s attempt in 114-17 A.D. to annex Armenia, Mesopotamia, and Babylonia ended in disaster. Neither, however, were the Parthians able to make significant inroads into Roman territory. Hadrian set the Roman Empire’s eastern boundary at the Euphrates river. The Arsacid Parthian dynasty lasted until 224 A.D. when its last emperor, Artabanus V, was overthrown and supplanted by his Persian vassal, Ardeshir I, founder of the Sasanian (Second) Persian empire.
The Kushan empire was formed in 48 A.D. with the invasion of northwest India by another nomadic people, Kushans or Yüeh-chih, living in Bactria. The process began when a Greek governor of Bactria seceded from the Seleucid empire in 250 B.C. and established a separate kingdom. Exploiting a power vacuum with the collapse of the Mauryan dynasty, the Bactrian Greeks seized parts of northern India around 200 B.C. However, the Greek princes fought among themselves. In a weakened state, their kingdoms were overrun sixty years later by Saka (Scythian) nomads driven southward by the Yüeh-chih from Gansu in western China. Although the neighboring Parthians were also attacked, they managed to divert the Sakas to an area in southern Afghanistan from which they overran Greek settlements in the Indus Valley. The Parthians subsequently imposed their rule upon the Indian Saka states. Around 100 B.C., the Yüeh-chih invaded and occupied Bactria, then under Saka rule. Part of this tribe, the Kushans, moved into the Indus Valley in the 1st century A.D., conquering both the Partho-Sakas there and an independent Saka state farther south. Their empire thus encompassed Bactria and northwest India on both sides of the Hindu Kush. During the nearly two centuries of its existence, the Kushan empire was a bridge between the Indian and Chinese cultures.
After Ardeshir I overthrew and supplanted the Parthian Arsacid dynasty in 224 A.D., Sasanian Persia attacked the Roman empire’s eastern provinces but was driven back from all but Armenia. The Kushan empire fell to Ardeshir’s armies in 241 A.D., though a remnant may have lasted in the Kabul Valley until the 11th century A.D. Before gaining political power, the Sasanid family had been hereditary priests of Anahita, an Iranian water goddess later associated with Ahura Mazda, the Zoroastrian chief god, in the Magian cult. Zoroastrianism in Magian form became the official religion of the Sasanian empire. However, another major religion, Manichaeism, appeared during the reign of Shahpuhr I (242-273 A.D.), when a Persian prophet named Mani was given permission to preach. Kartir, a Zoroastrian priest seeking to uphold Zoroastrianism as the Sasanian state religion, persuaded Shahpuhr’s second successor, Vahram I, to arrest and execute Mani. The Sasanid emperors saw themselves as successors to emperors of the pre-Greek Achaemenian dynasty who had made Persia a world power. Their military aggression directed against the Romans, Armenians, Kushans, and others was intended to restore the Persian empire to its former greatness.
The Sasanian and Roman empires were engaged in a military and religious struggle for nearly four centuries. The Romano-Persian war of 337-60 A.D. ended inconclusively. The Roman emperor Julian was killed while invading Persia in 362. His successor, Jovian, had to cede five Armenian provinces to Persia in order to extricate the Roman forces. Christians living in Persia were suspected of being a Roman fifth column. The reverse was true, from a Roman perspective, of Manichees living in the Roman empire. Shahpuhr II began persecuting Christians in 339 A.D. The persecution was lifted a half century later. In 440 A.D., Emperor Yazdigerd II ordered all his subjects to convert to the Zoroastrian religion, causing a series of revolts in the Armenian provinces. In 484, Ephthalite Huns occupied eastern territories belonging to the former Kushan empire. Emperor Peroz was killed in battle, and the Persians had to pay tribute to the Huns. This military disaster caused a social revolution in Persia. A communistic sect of the Manichaean religion, Mazdakism, stirred the poor masses in opposition to the Zoroastrian clergy and wealthy noblemen. When emperor Kavadh I converted to this religion, its program was put into effect.
One of Kavadh’s sons, later Khusro I, persuaded his father to disavow Mazdakism. He then proceeded to crush this sect. As emperor, Khusro I (531-79 A.D.) decentralized the military and instituted certain economic reforms to alleviate conditions that had caused the Mazdakite movement. Allied with the Turks, he overthrew the Ephthalite empire in 563-67 A.D. It was partitioned along the lines of the Oxus river. In 572, Khusro began a war with the East Roman empire which lasted for 18 years. The war’s unpopularity caused his son and successor, Hormizd IV, to be murdered. The East Roman emperor, Maurice, unseated the Persian usurper and put Hormizd IV’s son, Khusro II, on the throne. Emperor Maurice then was killed in a mutiny. To avenge his benefactor’s death, Khusro II invaded the East Roman Empire. This last Romano-Persian war, lasting from 604 to 628 A.D., was the bloodiest of all. It ended, upon Khusro II’s death, with a treaty restoring territories to the situation before the war. Weakened by this conflict, the Sasanian empire was in no shape to withstand the Arab armies which attacked Persia in 633 A.D. The Islamic conquest was complete by 651.
India
The Aryan conquerors of India built a new society on the ruins of an earlier civilization when they invaded this region in the middle of the 2nd millennium B.C. A system of hereditary castes determined its class structure. Brahman priests controlled the rituals believed necessary for a prosperous and healthy life. There was a rich religious literature written in the Sanskrit language consisting of hymns, rituals, poetic narratives, and philosophical discussions. A second wave of nomadic immigrants entered northwest India around 600 B.C. The political center of gravity moved eastward from the Indus to the Ganges valley. A group of small kingdoms there was ruled by descendants of the Aryan warriors. The two strongest were Kosala (Uttar Pradesh) and Magadha (Bihar) in the northeast. The ruler of the Magadha kingdom, Bimbisara, attempted to create an empire. It was in this environment of small warring states that the religious thinkers Mahavira and Buddha lived and preached during the late 6th century B.C. In 518 B.C., the Persian emperor Darius I invaded and annexed the western part of the Indus Valley. In 478 B.C. Prince Vijaya sailed from Gujarat to the island of Sri Lanka where he founded a Singhalese kingdom. Alexander the Great penetrated deep into the Indus Valley in 327-25 B.C. and left several garrisons.
Around 322 B.C., Chandragupta Maurya, founder of the Mauryan empire, expelled the garrisons which Alexander had left in northwest India. He went on to conquer the kingdom of Magadha. In 305 B.C., Seleucus I attempted to recover the lost Indian territories but was defeated by Chandragupta’s army. After making peace with the Indians, Seleucus ceded certain Greek territories to Chandragupta and received 500 war-elephants in exchange, for use in a forthcoming campaign against Antigonus I. Chandragupta’s grandson, Asoka, conquered the southeastern kingdom of Kalinga in 261 B.C. Asoka’s empire now included most of the Indian subcontinent with the exception of the southern tip. After defeating Kalinga, the emperor suddenly repented of further conquests and became a lay member of a Buddhist order. He spent his remaining years promoting Buddhism and issuing moral edicts. His government was an intrusive, authoritarian bureaucracy bent on ethical reform. It tried to curb wasteful rituals and improve economic efficiency. Much of what we know about Asoka comes from multilingual inscriptions in stone slabs which he placed about his realm. The Mauryan empire began to disintegrate not long after Asoka’s death in 232 B.C. and was extinguished in 185 B.C.
India was again divided into warring kingdoms during the next five hundred years. During the 2nd century B.C., Greek princes of Bactria occupied a part of northern India until Saka nomads overran their territories. The kingdom of Kalinga regained its freedom and became militarily aggressive. The Sunga dynasty, founded by the general who had assassinated the last Mauryan emperor, took possession of Bihar and Uttar Pradesh including Pataliputra, Asoka’s former capital city. Another dynasty, the Andhras, controlled most of the Deccan in south-central India. Saka principalities, satraps of the Kushan empire, occupied the west coast of India, south of the Indus Valley. The Kushan empire itself controlled the northwest region. This empire and the Andhra kingdoms were both destroyed around 224 A.D. A period of political instability ensued. During this time, Indian culture and religion were undergoing a major transformation. Sanskrit literature experienced a revival under the Sunga and Kanva dynasties (185-27 B.C.). Mahayana Buddhism, a savior religion, developed from the original Buddhist teaching. The classic Tamil writings on ethics and statecraft were composed. New gods were added to the Hindu pantheon as Brahman authority was affirmed.
During the 4th century A.D., a fortuitous marriage reunited northern and southern Bihar. The Gupta dynasty thus began with the rule of another Chandragupta in 320 A.D. His son, Samudragupta, and grandson, Chandragupta II, enlarged its domain in the Jumna-Ganges basin and conquered the western Saka satrapy with its capital at Ujjain. This empire included northern India east to west but did not extend south beyond the Vindhya mountains. Though territorially less extensive than the Mauryan, the Gupta empire was no less culturally distinguished. The Guptas were Hindu Brahmans who were tolerant of other religions. Samudragupta surrounded himself at court with accomplished artists and scholars. Indian sculpture, literature, and astronomy then reached new heights. The Sanskrit poet and playwright, Kalidasa, lived during this time, as did Vatsyayana, author of the Kamasutra. The Laws of Manu, written around 400 A.D., are the classical expression of Hindu law. The game of chess was invented and the so-called “Arabic” numerals were first used. White Hun invasions between 455 and 544 A.D. extinguished this culturally brilliant society although it was briefly rekindled during Emperor Harsha’s reign in the early 7th century A.D.
The Gupta dynasty ruled India’s last indigenous empire. Thenceforth the prevailing pattern was that of foreign invaders from the north seeking to penetrate the Indian subcontinent and being assimilated by the Hindu culture. The Ephthalite Huns overran territories in the Oxus-Jaxartes basin. When Persians and Turks overthrew this northern kingdom in 563-67 A.D., many Huns migrated to India where their descendants, the Rajputs, became hereditary princes. Emperor Harsha reunited northern India in 606-12 A.D. His bid to expand southward was defeated by Palakeshin II of the Chalukya dynasty in 620; the Chalukyas were, in turn, defeated in 642 by the rival Pallava dynasty of southeast India. The Tamil-speaking Pandya kingdom continued to hold the southern tip of India throughout this period. Indian refugees from the Huns brought Hindu and Buddhist culture to southeast Asia and Indonesia. Tibet came within India’s cultural orbit after an Indian script, adapted to the Tibetan language, was used to translate Mahayana Buddhist texts into Tibetan. That happened after a Tibetan army, incited by a Chinese diplomat, invaded India in 647 A.D. to punish a usurper who had seized power upon Harsha’s death; the victorious invaders were themselves “captured” by the Indian culture.
Indian political history following the decline of the Gupta empire is complicated by regional compartmentalization and a plurality of states. A Chinese Buddhist pilgrim who visited India in the 7th century A.D. reported seventy different kingdoms. In the southeast, the Pandya and Pallava kingdoms were the dominant powers until the 10th century. The Chola kingdom, which defeated the Pallavas in 897, took control of the south-central region for the next three centuries. The Cholas were in the best position to reunite Hindu India in the period when the Moslems were encroaching upon Indian territory from the northwest. However, the Chalukyas, to the north, engaged the Chola empire in a protracted struggle until both sides were exhausted. That left the door open to the Moslems. Rajput clans, descendants of the White Hun invaders, controlled northern India after King Harsha’s death. The Chalukya dynasty in Maharashtra governed the Deccan (south-central) region from the mid 6th century until 752, when they were overthrown by the Rashtrakutas, previously a tributary state. This dynasty lasted until it was, in turn, overthrown in 973 by Taila II, who revived the Chalukyan empire. During the 8th century, two new dynasties appeared in northern India, the Pratiharas of Rajasthan and the Palas of Bengal, which lasted until the 11th and 12th centuries, respectively.
Moslem armies which had overrun southwest Asia reached India in 711 A.D. and seized lands in the lower Indus valley. The Hindu kings made no serious move to evict them. The Turkish Emir of Ghazni defeated a coalition of Indian princes in 991 and extended Moslem rule to include lands east of the Khyber Pass. His successor, Mahmud, pushed the frontier forward to Lahore and conducted raids into the Jumna-Ganges basin and in Gujarat. Then, Ghoris from Afghanistan, who had been converted to Islam in 1010 A.D., supplanted the Ghaznavid dynasty. Moslem armies completed their conquest of the Jumna-Ganges basin and Bengal between 1192 and 1202 A.D. Muhammad Ghori appointed a slave-viceroy who ruled his kingdom until a ruler of the Khwarizm, ex-vassals of the Saljuq Turks, ended that dynasty in 1215. Though India escaped Mongol destruction, the Mongols’ self-styled successor, Tamerlane, sacked Delhi in 1398-99 and slaughtered 80,000 inhabitants. Previously, the Moslems had conquered the Deccan and attempted to move the capital of their empire from Delhi to that region. The Islamic states in the Deccan became an independent empire ruled by the Bahmanid dynasty. The Bahmanid state broke up into five successor states in the period between 1482 and 1512 A.D. Several of these states formed an alliance which overthrew the Hindu empire of Vijayanagar in 1565 A.D.
A descendant of Tamerlane, Babur, invaded northern India from Afghanistan in 1525 and established the Mogul dynasty there. Babur’s son, Humayun, was evicted from India, but he successfully reentered in 1555. Humayun’s son, Akbar, expanded the empire, created an efficient administration, and promoted reconciliation between Moslems and Hindus. Some of Akbar’s successors took a less benign view of subject peoples. Aurangzeb reimposed the poll tax on non-Moslems, put a Sikh guru to death, and provoked a rebellion among the Rajputs. He also imposed Mogul rule upon the independent Moslem states in the Deccan and down to India’s southern tip. A Hindu counteroffensive emerged in the form of the Maratha light cavalry which conquered Mogul territory and reestablished a Hindu kingdom under their leader, Shivaji. After Aurangzeb’s death in 1707, the Mogul empire rapidly disintegrated. Great Britain and France fought for commercial domination of India. Robert Clive’s victory over the French at the battle of Plassey in 1757 decided that contest in favor of the British. The British East India Company became the de facto ruler of Bengal, Bihar, and Orissa when it assumed responsibility for provincial revenue collection on behalf of the Mogul empire in 1757-65. A century later, the government of India was transferred to the British crown.
China
The Far Eastern Chinese society, in contrast to India’s, maintained a large degree of political unity during more than two thousand years following the creation of its first political empire in 221 B.C. The Xia, Shang, and Western Chou pre-imperial dynasties, lasting until 771 B.C., were kingdoms in northwest China which enjoyed hegemony over neighboring states. The Eastern Chou dynasty, headquartered at Loyang, continued until 256 B.C. During that time, a number of large states arose at the periphery of the empire. The Chou ruler was reduced to ceremonial functions. After the central government lost control of its vassals, these states went to war against each other. Their number decreased from three hundred to twenty. By 506 B.C., there were seven large states surrounding the city of Loyang. They fought for control during the three centuries between 506 and 221 B.C. known as the “period of the warring states.” Alliances were formed and broken. After 453 B.C., the states improved their armies by replacing hereditary officers with ones of proven ability. Prince Hien of Ch’in militarized the peasant class. War was transformed from a contest between chariot-riding aristocrats to massive infantry battles. In the final phase, between 230 and 221 B.C., the kingdom of Ch’in conquered its rivals.
The King of Ch’in, Shih Hwang-ti, became the first Chinese emperor. Embracing the Legalist philosophy, he was determined to improve society by issuing and enforcing laws. This emperor replaced the hereditary nobility with appointed officials, set up a system of provincial administrations, adopted standard weights and measures, standardized the Chinese script, began construction of the Great Wall to protect the northern border, and created a centralized civil service. His government established the legal framework for peasants to own and transfer land. Its army acquired crossbows and replaced chariots with cavalry. In keeping with his strict reforms, Shih Hwang-ti burned books from schools of philosophy other than Legalism and even proposed burying their scholars alive. The result was to create a unified nation which was organized by an unambiguous set of principles. On the other hand, the abrupt creation of a national bureaucracy and suppression of competing philosophies antagonized those who had previously enjoyed favor and power. With its population depleted by war, the peasantry was further oppressed by taxes and corvées. The Ch’in empire, too ambitious in its reach, lasted only thirteen years. The first emperor died in 210 B.C. while on an inspection tour. A general insurrection took place a year later, aimed at restoring the old order.
Liu P’ang, founder of the Han dynasty, was the winner in the civil war that followed. Instead of reversing the first emperor’s policies, he continued them in a more moderate form. Liu P’ang dismantled the fiefs by requiring that all sons, not just the oldest, inherit their father’s lands. Repudiating the Legalists, he promoted first the Taoist and then the Confucian philosophy. In 196 B.C., Liu P’ang ordered the imperial districts to send their brightest young men to the capital to be selected for administrative posts by passing an examination. A subsequent emperor, Wu-ti, based the examination upon knowledge of the Confucian classics. The structure was in place for a system of government that served most subsequent dynasties. Although Liu P’ang reappointed lesser nobility, these titles were rewards for faithful government service and could be revoked. The real power was held by the emperor, on one hand, and the Confucian bureaucracy, on the other. This bureaucracy consisted of many separate departments and overlapping functions, which constituted a checks-and-balances system. Even the emperor’s conduct could be criticized by an official known as the “censor”. The emperor’s household had its Inner and Outer courts, including relatives, eunuchs, harems, and high officials.
The Han dynasty, founded by Liu P’ang, is divided into a Western Han (202 B.C.-9 A.D.) and an Eastern Han (25-220 A.D.) period. The Confucian scholar-administrators established themselves in the first period as a privileged class. They effectively controlled the imperial government and took it upon themselves to decide as well whether a dynasty still enjoyed the Mandate of Heaven. Additionally, the Confucian administrators used their governmental positions to enrich themselves by taking land from the peasants. Under the Han dynasty, peasants could be forced to contribute one month’s free labor to the government and were subject to two years’ conscription for military service. As in Roman society, peasant-farmers who neglected their land while fighting wars often lost it to rich speculators. The new class of Confucian landlords added to the burden. An imperial decree issued in 6 B.C. proposed to limit individual landholdings, but the administrator-landlords made sure that the decree was not carried out. The Western Han dynasty fell in 9 A.D.
Wang Mang, a relative of the royal family, usurped power and tried to carry out agrarian reform. He, too, was stymied by the Confucian bureaucracy. Peasant armies known as the “Green Woodsmen” and “Red Eyebrows” staged an uprising in the Shandong province. In 25 A.D., a powerful landlord and warlord named Kwang-wu restored the Han dynasty and suppressed the peasant revolt. Because its capital was moved east from Changan to Loyang, the restored regime became known as the Eastern Han dynasty. The Confucian bureaucrats remained in power. Not surprisingly, the same problems that had bedeviled the Western Han dynasty resurfaced. Rents were raised on the peasants. The imperial examinations were conducted dishonestly. Many peasants took refuge on the big landowners’ estates while others fled to southern China. In 184 A.D., a Taoist physician organized a nationwide peasant revolt known as the “Yellow Turban” rebellion. This lasted nine months before it was crushed by an alliance between big landlords and the regular army. The Eastern Han empire split into three kingdoms controlled by warlords in 220-22 A.D.
A period of civil disorder followed which lasted more than three centuries. Mahayana Buddhism entered China. The warm, marshy southern region attracted an influx of population. The Chinese empire was briefly reunified in 265-80 A.D., but fell apart ten years later. Then nomadic barbarians invaded northern China and established kingdoms there. A branch of the Chin family reestablished the (Eastern) Chin dynasty in southern China. Five imperial dynasties held that region, including north Vietnam, against barbarian attacks from the north. By 439 A.D., the T’o-pa “Wei” dynasty had conquered all the other kingdoms in northern China. Its sinified tribesmen became major landowners. The Wei emperor undertook substantial agrarian reform. Every able-bodied peasant was given a plot of land of minimum size and peasant associations became collectively responsible for tax payments. However, the Wei dynasty was overthrown in 535 after several unsuccessful attempts to conquer southern China. Sui Wen-ti, founder of the Sui dynasty, did reunite the country in 589 A.D.
The Sui dynasty lasted only 37 years. Its second emperor, Sui Yang-ti, undertook construction of the Grand Canal linking the Yellow and Yangtze rivers. His heavy demand for corvée labor led to peasant revolts and civil war during which the emperor was assassinated by his bodyguard. Then Li Yüan and his son established the T’ang dynasty (618-906 A.D.), arguably China’s most glorious. The T’ang emperors, like the Han, continued the Sui program but at a more moderate pace. Their capital city, Changan, on the site of modern Xi’an, became a culturally vibrant metropolis of 800,000 persons with a cosmopolitan flavor. T’ang poetry, calligraphy, and sculpture achieved their classic expression. Commercial activities flourished. Silk weaving, porcelain manufacture, shipbuilding, and papermaking were brought to a high level of art. Korean and Japanese intellectuals flocked to the T’ang capital and picked up such cultural elements as chopsticks and kimono dress. The Japanese even built a replica of Changan at Nara in 710 for their own first capital.
In 626, Li Yüan’s middle son, later known as T’ai-tsung, murdered his two brothers and deposed his father to assume the imperial throne. However, this unfilial usurper was an able and intelligent ruler until his death in 649. An ambitious young woman named Wu, who had belonged to T’ai-tsung’s harem, managed to become the new emperor’s concubine a year later, and then empress five years later. When this emperor died in 683, Empress Wu put her grown son on the throne, then demoted him and put another son on the throne, and finally assumed the throne herself. She was overthrown in 705. The T’ang dynasty reached a cultural peak during the reign of her grandson, Hsüan-tsung, who ruled from 713 to 755. However, its military and political fortunes went into a decline. In 751, Arab armies defeated the Chinese at the Talas river in Central Asia. An Lu-shan, military governor of a northern province, launched a rebellion against the central government in 755 which, lasting nine years, devastated the Chinese population. Though weakened, the T’ang lasted another century and a half. A reform of land taxation in 780 stabilized government finances. A revived cadre of Confucian scholars allowed the Chinese nation to survive the brief period of anarchy. Confucian and Taoist partisans attacked Buddhism and other foreign religions.
Heavy taxation and homelessness sparked peasant revolts in the late 9th century A.D. The T’ang dynasty expired in 907 A.D. when a warlord named Zhu Wen entered Changan and forced the emperor to abdicate. During the period of Five Dynasties, continual war devastated society. The next dynasty, the Sung, arrived a half century later. Chao K’uangyin, commander of the imperial guards under the later Chou, mutinied and declared himself emperor. Now threatened by Khitan and Tangut barbarians in the northwest, the Chinese empire made peace by paying tribute. The central government consolidated the regional military commands to prevent future rebellions. An energetic and courageous administrator, Wang An-shih (1021-86), instituted several reforms. He revamped the imperial examinations, provided low-interest loans to peasants, abolished the system of corvée labor, reformed the tax on land, and brought back the peasant militia. The Sung period continued the cultural brilliance associated with the T’ang. When Jürchen barbarians conquered the Sung capital of Kaifeng in 1126, the empire lost all its territory north of the Yangtze river. The Sung dynasty continued in south China until Mongol armies under Kublai Khan conquered its remaining territory in 1273-79 A.D.
The Mongols were the first barbarian tribe to conquer China in its entirety. Kublai Khan moved his capital from Qaraqorum in Mongolia to Peking in 1260-67. Mongol armies overpowered the southern Sung empire, taking its capital in 1276. Gunpowder was used in its defense. The Mongols’ Yüan dynasty, lasting from 1260 until 1368 A.D., was perhaps the least representative of all Chinese dynasties. Its ruling class remained aloof from the Chinese population. This nomadic people despised the sedentary Chinese and never accepted their culture. During the conquest, they had ruined the agricultural infrastructure of northern China causing mass starvation. Yüan emperors employed foreigners rather than Confucian scholars in top administrative positions. They cordially received diplomats from Moslem countries and the West. The Yüan dynasty was, of course, unusual because its power extended well beyond the frontiers of China. The Mongol territories extended from Manchuria and north Vietnam to lands adjoining Syria and Hungary. Even so, naval expeditions against Japan and Java failed. Local revolts spread through China in the 1340s. The winner among the competing warlords was Chu Yüan-Chang, founder of the Ming dynasty. By 1382, he had evicted the Mongols from China.
The Ming dynasty (1368-1644 A.D.) revived the earlier pattern of Chinese society. Examinations based on the Confucian classics again became the route to top positions in the imperial government. Fearing another Mongol-style invasion, Ming emperors kept a close eye on nomadic tribes in the northwest. Emperor Yung-lo (ruled 1403-24) conducted five military campaigns against them. Briefly, a Mongol leader besieged Peking but could not penetrate its walls. In 1414, Ming armies reconquered Annam (Vietnam) but this nation became independent fourteen years later. Korea and Tibet remained Chinese tributaries. The Portuguese and Dutch established trading posts in southern China. European missionaries and scholars were received at the imperial court. Emperor Yung-lo commissioned a massive encyclopedia of Chinese culture to be written, which filled 11,000 volumes. He also sent a large fleet of sailing ships to ports throughout the Indian Ocean on seven separate expeditions between 1405 and 1433 A.D. Later emperors turned reclusive and xenophobic. Wan-li (1573-1620) retreated to interior parts of the Forbidden City, effectively leaving the eunuch administrators in control. The last Ming emperor committed suicide in 1644 as rebel forces overran Peking.
The last imperial dynasty, the Manchu or Ch’ing, brought to power a sinified group of hunters from Manchuria belonging to the Jürchen people. At the beginning of the 17th century, a Jürchen chief, Nurhachi, had united a previously divided group of tribes and conquered much of Manchuria with a tightly organized army. Nurhachi proclaimed himself emperor of the Later Chin dynasty in 1616. Rebellions broke out in Ming China. In 1644, a Ming general enlisted help from the Manchurians to quell the rebellion. Pouring into northern China, they quickly occupied Peking and made it their capital. Between 1675 and 1683, Manchu armies subdued the remaining Ming forces which had retreated to the south. While the new dynasty continued the Chinese form of government, the Jürchen people held themselves apart from the Chinese as a ruling class. Two emperors, K’ang-hsi (1661-1722) and Ch’ien-lung (1736-1796), both able military and political leaders, dominated this period. Under the Manchu regime, the Chinese government resisted territorial encroachment from Czarist Russia, conquered Taiwan, and discouraged western influence. However, the Europeans obtained commercial concessions from China. By the end of the 19th century, they had reduced the empire to political impotence. The last Chinese emperor, Henry Pu Yi, who reigned between 1908 and 1912, died under communist rule in 1967.
East and Southeast Asia
The nations strung out along the perimeter of east and southeast Asia are cultural satellites of the two central powers, India and China. Indian culture spread peacefully to neighboring areas through trade, immigration, and religion. Its sphere of influence spread to southeast Asia with emigration following the Huns’ destruction of the Gupta empire in the 5th century A.D. Many fled from the Pallava kingdom, taking with them the Grantha script. The spread of Chinese culture was more a projection of China’s political power. The lands under its influence were acquired through military conquest or drawn in by the lure of Chinese civilization. China’s satellites include Korea, Japan, and North Vietnam. India’s include Ceylon, Burma, Thailand, Malaya, Cambodia, and Indonesia. There is an overlay of Islamic culture in Malaya and the southeast islands. Tibet, positioned between the two powers, has been politically annexed to China though it shows the influence of Indian religion.
An ancient Hindu kingdom was founded in Champa (south Vietnam) by Indian adventurers in 192 A.D. The Han emperor, Wu-ti, had annexed Nam-Viet (north Vietnam) in 111 B.C. Champa was an independent kingdom until the 12th century when it became a vassal of the Khmer empire. This kingdom was conquered in 1471 by the Annamese empire to the north. The northern Vietnamese people were Mahayana Buddhists under Chinese influence. They had been a part of China until the fall of the T’ang dynasty. Reconquered by the Yüan and Ming emperors, they regained their independence in 1428. The Khmer empire of Cambodia rose to power in the period between the 9th and 12th centuries under a dynasty of god-kings. Its chief monument is the temple complex at Angkor Wat constructed by Suryavarman II (1113-1150). The T’ai kingdom on its western border destroyed the Khmer state between 1350 and 1360. These T’ais were descendants of a people from Yunnan in western China who had migrated southward and formed a tightly organized state at Ayut’ia in 1350. They conquered Cambodia, lower Burma, and much of the Malay peninsula. West of Thailand, Burmese tribesmen migrating from the north overcame the native Mon people of Burma and established the empire of Pagan in 1044. The Mongols destroyed it in 1287.
Unlike the agriculturally based societies of southeast Asia, the peoples who inhabited the Indonesian islands made a living primarily from trade. Sea vessels traveling between India and China had to pass through the Straits of Malacca or Sunda, located at opposite ends of Sumatra. The Sumatran empire of Srivijaya prospered by intercepting and taxing ships in its territorial waters. It was the strongest power in the region between the 7th and 9th centuries A.D. The Indian Cholas and the Javanese were important rivals. Kings of the Shailendra dynasty ruled central Java in the 8th and 9th centuries. They have left a Buddhist monument carved in a hill at Borobudur. These monarchs were replaced by the Hindu Sanjaya dynasty. Power then shifted to the kingdoms of east Java, the last of which, Singosari, ruled over an expanded part of the Indonesian archipelago until the late 13th century. The Mongols attacked Java in 1293 during an internal rebellion. The late king’s son-in-law, Vijaya, welcomed their help in defeating the rebels and then treacherously turned on them. After the Mongols were defeated, Vijaya founded the Majapahit empire, which dominated a broad area in the 14th century. In 1403, a Shailendra prince named Paramesvara married a Majapahit princess and founded the city of Malacca. After he converted to Islam, Malacca became a base for propagating this religion.
The Chinese emperor Han Wu-ti first established military outposts on the Korean peninsula in 109-08 B.C. They were destroyed after the Eastern Han empire fell in the 3rd century A.D. However, the northern Korean state of Koguryo adopted Mahayana Buddhism and Chinese-style public administration around 372 A.D. Numerous Koreans claiming Chinese ancestry emigrated to Japan in the 5th and 6th centuries. In the 7th century, T’ang emperors conquered the states of Koguryo and Paekche with the aid of Silla kings. Silla then expelled the Chinese. The unification of Korea under local rule did not prevent Chinese culture from continuing to gain influence there. Mahayana Buddhism and the Chinese script both took root during that period. The Silla kingdom was overthrown in the early 10th century. The Koryo dynasty then ruled Korea until the Mongols arrived in 1231. Finally, the Yi dynasty, which suppressed Buddhism in favor of Confucianism, came to power in 1392. This regime lasted until 1910. A vassal of Manchu China, the “hermit” kingdom of Korea existed in nearly total isolation from the rest of the world.
Japan became sinified during the period of the T’ang dynasty. Its imperial government, headquartered first at Nara and then at Kyoto, copied the Chinese model. However, Japanese society did not have enough educated persons to staff a central government effectively, so power devolved to provincial governors. Also, the powerful influence of the Fujiwara family and of Buddhist priests encroached upon the emperor’s authority. Provincial gentry opposed to the Fujiwaras set up their own feudal governments around the country. After prolonged civil war, the Minamoto family defeated their rivals. Their leader, Yoritomo Minamoto, established a military dictatorship at Kamakura known as the shogunate in 1185. He did not seek to become emperor but instead headed a parallel government which exercised the real power. This new dynasty of warriors reformed the courts and restored peace to society. It presided over a cultural blossoming and repelled the Mongol naval attacks against Japan in 1274 and 1281. Emperor Go-Daigo attempted a coup d’état in 1331. The shogunate was then transferred to the Ashikaga family in Kyoto. Their rule broke down after two centuries. Civil war took place in the streets of Kyoto.
Beginning in the middle of the 16th century, three successive warlords reunified the country under military rule and brought peace. The first, Oda Nobunaga, seized power after winning a battle with another warlord who had marched on Kyoto. After ruling for twenty years, he was assassinated in 1582. One of Nobunaga’s lieutenants, Toyotomi Hideyoshi, avenged this death. After completing the unification of Japan in 1590, he invaded Korea, intending later to attack the Ming dynasty. The invasion was repelled. Hideyoshi used shrewd strategies to foil potential opponents. He ordered all non-samurai to turn in their swords, ostensibly to make a gigantic metal statue of Buddha, while also playing Christian missionaries off against militant Buddhists. Hideyoshi wanted his son to succeed him, but, after his death in 1598, an associate, Tokugawa Ieyasu, gained the upper hand. As shogun, Ieyasu moved the seat of government to Edo (modern Tokyo). He kept the feudal lords in check by forcing them to maintain two residences and so incur a great expense. Finally, Ieyasu expelled the Portuguese missionaries from Japan. This arrangement preserved the peace for 250 years while Japan was closed to the outside world. Then, in July 1853, the American commodore Matthew Perry brought a fleet of gunboats into Tokyo bay, forcing the shogunate to reopen the nation. The imperial dynasty was restored in 1868.
Pre-Columbian America
When Hernando Cortés conquered the Aztec empire of Mexico in 1519-21 A.D., he arrested the growth of a militaristic state in an expansive phase. The main civilizations of the Americas were concentrated in Mexico and Central America, on one hand, and along the Pacific coast of South America, on the other. The Olmec and Chavín societies unified those two regions in the first half of the 1st millennium B.C. The first great empire of the Americas, the Mayan, flourished in Guatemala and the Yucatán peninsula of Mexico beginning at the time of Christ or, perhaps, in the next three centuries. In its classical period, this culturally rich society lasted about four hundred years. Its contemporary, Teotihuacán in the Valley of Mexico, was the largest Meso-American city existing before the Spanish conquest. The Mayan people were distinguished by their mathematical and astronomical knowledge and their art. Teotihuacán was violently destroyed around 600 A.D. The Mayan culture continued in outlying jungle areas even after its ceremonial centers were abandoned. In South America, two cities - Huari in Peru and Tiahuanaco in Bolivia - began forming their own empires around 600 A.D. Between them, they controlled two thousand miles of coastal territories from Ecuador to northern Chile. These empires lasted about two centuries.
The classical Mayan civilization of Meso-America fell around 900 A.D. The next significant society in that area was the Zapotec society, located in the Oaxaca province of southern Mexico. The Toltec people gained political ascendancy in the Valley of Mexico around 900 A.D. Their capital city was Tula, just north of Mexico City. Toltec in the Aztec language means “skilled worker”, suggesting architectural prowess. Ruins of temples, palaces, and pyramids still mark the site of the Toltec capital. The founder of Tula, Topíltzin, was expelled by political opponents. He fled towards the eastern coast. Legend had it that this exiled Toltec king would return some day from the sea as the feathered serpent god Quetzalcoatl. In fact, a conqueror bearing the same name in the Mayan language founded a small empire on the northwest coast of the Yucatán peninsula in 987, which lasted until 1224. The Aztecs migrated from the desert of northern Mexico in the late 12th century. Around 1325, they settled at the western edge of Lake Texcoco and, for defensive reasons, created a Venice-like city built on piles in the lake. This became Tenochtitlán, or Mexico City. In South America, a number of large cities including Chanchán and Cuizmanco exercised political power in the “urbanizing age” between 1000 and 1430 A.D.
The Aztecs took the first step in building an empire around 1430 when their leader, Itzcoatl, formed a military alliance with two neighboring city-states. During the next ninety years, the Aztec confederation conquered thirty city-states. The purpose of these wars was to loot, exact tribute, and gather captives for religious ceremonies involving human sacrifice, not to create a politically integrated society; for the Aztecs believed that the gods needed to be liberally fed with human hearts to maintain the universe. By 1519, this military machine controlled a territory stretching across central and southern Mexico from the Atlantic to the Pacific ocean.
The Incas of Peru had begun to build their empire around 1438 when Pachacuti, a son of the ruler of Cuzco, repelled an attack by the Chancas. He then set about conquering the Chanchán territory as well as that of other Andean peoples. A hundred years later, the Inca empire between the Andes mountains and the Pacific ocean had grown so large that a second capital, Quito, was added to administer the northern part. A civil war between royal brothers in these two capitals was in progress when Francisco Pizarro arrived in 1532, a conflict the Spaniard shrewdly exploited. Likewise, Cortés’ war of conquest in Mexico was materially aided by provincial peoples who hated the Aztecs.
A SHORT HISTORY OF CIVILIZATION II
A Change in Religion
The second civilization, introduced by alphabetic writing, began with a change in the nature of religious worship which took place in the 1st millennium B.C. Primitive religions, one may recall, typically include rituals intended to increase the fertility of agriculture. They acknowledge and feed a community’s ancestral spirits. They may involve animal or even human sacrifice as a means of pleasing the gods. These religions are polytheistic, reflecting the diverse elements of nature. They are instituted in cults of particular gods or goddesses, which function under the supervision of hereditary priests possessing the knowledge to perform the rituals correctly. These priests also exercise political power. Later, the nature gods become associated with the collective identity of tribes, city-states, and kingdoms. The gods and goddesses become patrons of particular peoples. Their totemic characters are adapted to express these peoples’ communal identity. The different deities are arranged in hierarchies mirroring tribal or national relationships within a political empire. The emperors are considered to be divine figures or to be uniquely endowed with divine authority and power.
All this changed with the wave of philosophical thinking that swept through societies of the Old World during the 1st millennium B.C. Hereditary priesthoods gave way to a more democratic and meritocratic method of selecting religious leaders. Sacrificial rituals mattered less than maintaining ethical conduct. An open-ended brotherhood of believers replaced stratified castes. Ideas began to play a dominant role in religion. Divine spirit, once confined to particular places or persons, became a universal presence. And so it was possible for anyone who consented or believed to adopt the religion, regardless of nation. Like law, the principles underlying the religion could be applied anywhere. These principles could be expressed in creeds. Learned doctors could ponder and dispute the finer points of God’s truth. Those who bucked the general consensus of belief could be branded heretics. The inner attitude or direction of heart would become the criterion of correct religion, not expertise in performing a ritual. In the West, correct religion also involved worship of the right God, who was right because he was the only real God. Religious worship changed with the concept of monotheism.
The Monotheism of Ikhnaton and Moses
It may be that the first “prophet” of this new religion was the Egyptian Pharaoh Ikhnaton, who reigned between 1367 and 1350 B.C. He was the first major historical figure to advance a program of monotheistic religion. A century before Moses, Ikhnaton proclaimed that the religion of Amun-Re, his ancestral religion, was false and that there was only one God, Aton, god of the sun, who ruled over all the earth. Aton gave life to all living creatures. Ikhnaton wrote poems of praise to Aton but forbade visual images to be made. He moved the capital north from Thebes to Akhetaton (“City of the Horizon of Aton”) and ordered monuments inscribed with the name of Amun to be defaced. While antagonizing Amun-Re’s powerful priests, Ikhnaton also neglected affairs of state. The Hittites invaded Egypt’s Asian dependencies and tribute stopped. The imperial treasury became empty. When Ikhnaton died, the priests of Amun-Re regained control and the old religion was restored by his successor Tutankhamen.
Moses, who lived in Egypt in the 13th century B.C., was Pharaoh’s adopted son. He would likely have been aware of Ikhnaton’s religious crusade. Whether or not Jewish monotheism derives from that source, Moses firmly embraced the concept of One God. The First Commandment states: “I am the Lord your God who brought you out of Egypt ... You shall have no other god to set against me.” None of the Ten Commandments had to do with performing rituals. All were concerned with right conduct and belief. Moses transformed the Hebrews from a tribe of a common nomadic type into a nation which lived in accordance with God’s law. He forced this society to conform to a particular set of ideals. Though not explicitly philosophical, his instructions delivered in God’s name were ethical precepts like those of philosophers. Moses railed against the Hebrews for fashioning a golden calf as an object of worship. His God, Jehovah or Yahweh, was an invisible or spiritual being rather than a “graven image”. It required a certain intellectual discipline to worship a god whom one could not see and whose existence, from a common perspective, might therefore be in doubt.
The God of the Hebrews, known to Abraham, Isaac, and Jacob, was identified as the God who had delivered his people from captivity in Egypt. This God had demonstrated earthly power in overcoming the will of Pharaoh. Powerful rulers like Pharaoh were thus coming under the yoke of a new kind of God, believed to be universal and omnipotent. A threat to monotheistic religion was the marriage of Hebrew kings to foreign women who brought other gods into the royal household. After Solomon’s death, the Hebrews took to worshiping Canaanite fertility deities such as Baal and Anath to seek increased agricultural productivity. A religious faction arose, led by the prophets Elijah and Elisha, which claimed that Yahweh alone should be worshiped. For the Hebrews to worship other gods was like being unfaithful in a marriage. A rebellion broke out in the northern part of Israel in 840 B.C. against the infidelity of the royal household. It spread to the priesthood of the Temple at Jerusalem. However, the Yahweh-alone party was unable to impose its views upon the nation. A group of religious writers, including Amos and Hosea, began to interpret God’s will in light of current events. A picture of God emerged as being jealous yet merciful and desirous of justice for the poor.
After Assyria conquered the northern kingdom of Israel in 722 B.C., the Yahweh cult served to rally nationalistic sentiments in the yet unconquered southern kingdom of Judah. Around 630, an unknown person in Jerusalem wrote a new set of laws and divine instructions, building upon older traditions, which was “found” by the high priest in the Temple a decade later and judged to be authentic by King Josiah. These writings form chapters in the book of Deuteronomy. They take an uncompromising stand against worshiping gods other than Yahweh. The Deuteronomy texts, embraced by the Yahweh faction, strengthened legalistic tendencies within Judaism. A later crisis occurred with Jerusalem’s capture by the Babylonians in 586 B.C. This event raised doubts about whether Yahweh was all-powerful, or even real, if this God would let his own people become subjected to foreign empires. However, the Yahweh-alone party, through the prophets, argued that God had devised this painful experience to punish the Hebrews for their previous apostasy and teach them a moral lesson. After the lesson was learned, God would restore the nation of Israel to its previous glory. Then it would be seen that God had sent his people into captivity for the purpose of revealing himself to other nations. Yahweh would be revealed as God of Jews and Gentiles alike, a truly universal God.
Meanwhile, because Deuteronomy restricted sacrificial rites to the Temple in Jerusalem, Jewish exiles living in Babylon were denied this means of practicing their religion in traditional ways. A type of nonsacrificial worship centered in such activities as praying, singing hymns of praise, and reading the Law was developed in its place. The core of Jewish religion lay in refusing to worship gods other than Yahweh and in observing the purity laws. The Yahweh faction produced a body of historical writings to support its interpretation of divine will. These, along with works of the prophets, were compiled in books of the Old Testament. The final version was not completed until the end of the 5th century B.C.
Zoroastrian Influence
When the Persian emperor Cyrus II in 538 B.C. issued a decree allowing the Jewish exiles to return to Jerusalem and rebuild Solomon’s temple, it seemed to confirm the theory that Yahweh was a universal God. Cyrus, the most powerful monarch in the world, had been compelled to do this God’s bidding. In fact, the time spent in Babylon and Persia had been beneficial for Judaism as a religion. It had transformed the religion of a once provincial people into a religion with advanced cosmological features. This was largely the work of the Iranian prophet Zoroaster (628-551 B.C.). His teaching, Zoroastrianism, was the state religion of Persia. Because the Persian government treated Semitic peoples in a benign manner, the Hebrews were receptive to Persian cultural influences. So postexilic Judaism included many elements that can be traced to Zoroastrian teachings.
Zoroaster was an original thinker who lived in a society that was in transition between agricultural and nomadic ways of life. The industriousness, honesty, and trust implicit in the agricultural life were qualities which he identified with goodness. In contrast, the nomads who raided settled communities and stole their livestock were identified with evil. Conflict between good and evil was the central feature in Zoroaster’s religious philosophy. Ahura-Mazda, the supreme god, led the forces of good. Lower gods, the daevas, whom one might call fallen angels, comprised the forces of evil, led by Ahriman. The world was a battleground between these two camps. Animals such as dogs and oxen, which helped man, were good, while such creatures as snakes, scorpions, and toads were evil. Zoroaster taught that blood sacrifices should be abolished while such virtues as humility, cleanliness, and compassion should be cultivated in daily life. Yet, human beings were to show unrelenting hostility towards those persons, creatures, or beings aligned with the forces of evil. A battle was raging continually both within the cosmos and the human heart. In the end, good would triumph over evil to win an everlasting victory. Before then, evil would be seen gaining the upper hand. A redeemer figure would snatch the victory from Ahriman just when he appeared to be winning.
The Jewish prophetic writers who lived after the Babylonian exile wove Zoroastrian elements into their scenario of future events related to God’s restoration of the Hebrew nation. The idea of national restoration began to be replaced by that of a supernatural kingdom which God would establish on earth. As in the Zoroastrian scheme, the forces of good and evil first would do battle to control the world. There would be a period of tribulation in which the righteous would suffer greatly. Then, God would intervene at the last moment to ensure victory for the good. A captain of evil, Satan, would take part in these events. A Messiah, elevated from the ranks of humanity, would appear as God’s agent at the moment of victory. He would be delegated the task of judging human souls and either allowing or denying them entrance to God’s perfect kingdom. The idea that the souls of departed persons might be resurrected for the Last Judgment comes from the Zoroastrian cosmology. So do concepts relating to angels and the hierarchy of heavenly beings. The stark duality between evil and good, darkness and light, is, however, Zoroaster’s main contribution to religious thinking. God had created the material world to give it over to Satan, trap him in a finite structure, and then destroy him. Man’s duty was to assist in that process.
Jews under Foreign Rule
In the tolerant atmosphere of Persian society, Jewish intellectuals readily absorbed these religious ideas. Then, suddenly, Alexander the Great conquered the Persian empire. The ensuing Greek culture was alien to Semitic peoples. Adherents of the traditional Judaic religion were thrust back into a hostile environment. In 167 B.C., the Seleucid king Antiochus IV Epiphanes, an ardent hellenizer, desecrated the Temple at Jerusalem. A priest named Mattathias, together with his five sons, launched a campaign of guerrilla warfare against the Seleucid empire. One of those sons, Judas Maccabaeus, led the rebel armies to a series of speedy victories against the Syrian Greek dynasty. He captured Jerusalem and restored Jewish worship in the Temple. The Maccabee family, as the Hasmonaean dynasty, ruled Judaea for about a century. At last the Jews had their own nation. Judaism became a missionary religion which required male converts to be circumcised. However, in ruling their worldly empire, the Hasmonaean rulers themselves became more hellenized. In 63 B.C. the Roman general Pompey intervened in a civil war and captured Jerusalem. Rome then ruled Judaea through proconsuls while the Herodian dynasty, hellenized Jews allied with Rome, ruled the northern part of Palestine, including Galilee.
As first Seleucid Greek and then Roman power asserted itself in Judaea, themes of national redemption enunciated at the time of the Exile took on new urgency. Messianic fervor ran high in hope that the House of David might be restored. The prophetic writings, anticipating the end of the world order, continued in a more intense and fantastic form. A tension existed between this spiritualized religion and Jewish political militancy. Jewish society in the 1st century B.C. was split into several factions based upon their attitudes toward foreign occupation. The Pharisees were extreme anti-Hellenists. Known as the “Party of the Righteous”, they had endured much persecution in their attempt to keep Jewish religion free of foreign influence. The Sadducees were upper-class Jews belonging to the Temple establishment who did not accept religious innovations such as belief in the Messiah. A political faction known as the Zealots favored armed resistance. The Zealots did mount a guerrilla offensive against Rome, but it was brutally crushed by Titus’ armies in 70 A.D. The last of this faction died in a mass suicide at the Masada fortress. Jerusalem was utterly destroyed. Sixty years later, another group challenged Roman rule following Simon Bar Kokhba, believed to be the Messiah. It, too, was defeated.
In the debacle of 66-70 A.D., more than a million Jews may have died of starvation and other causes. Another one hundred thousand were taken to Rome as slaves. The leader of Jerusalem’s Pharisees, Johanan ben Zakkai, was smuggled out of the city in a coffin. He later received permission from Emperor Vespasian to settle in Jamnia and establish an academy of Jewish studies there. Now that the temples in Jerusalem and Egypt had been destroyed or closed, this institution became the center of Jewish religious authority. There Judaism was reorganized around worship in synagogues. Its practice focused upon study of the Torah and observance of laws and rituals. The canon of sacred literature was determined. After Jewish uprisings in Cyprus, Egypt, and Palestine during the first half of the 2nd century A.D., the Roman government considered banning Judaism. Instead, a commission investigated Jewish law and suggested changes. Rabbi Judah the Prince published a code of laws, known as the Mishnah, which spread through the Graeco-Roman world. The Palestinian patriarch Hillel II published procedures for regulating the Jewish calendar in 359. After Christianity became the Roman religion, Jews experienced a period of increasing hostility. Theodosius II abolished the Jewish patriarchate in 425. The East Roman emperor Justinian proscribed rabbinic law and exegesis.
Conditions improved for the Jewish population in western Europe and Persia during the 8th century. The new Frankish and Arab rulers tolerated them as minority peoples within their large, heterogeneous empires. Christian kings often granted charters to their Jewish subjects guaranteeing their right to exist as a self-governing community in exchange for collection of special taxes. In the Ukraine, a Turkic dynasty established the Khazar empire with an army drawn from Iranian Moslems. Rejecting both Christianity and Islam, its rulers converted to Judaism in 750 and made this the state religion. The Khazar empire played an important role in commercial contacts between east and west until Prince Sviatoslav of Kiev conquered it in 970. Jews also thrived in the cosmopolitan culture that developed in Baghdad under the Abbasid dynasty. In the 10th century, the Moorish city of Cordoba became a similar cultural magnet for Jews. The Berber Almohade dynasty that swept across North Africa and Spain in the 12th century brought an end to this culture. Meanwhile, the Christian crusaders’ calls to rid Europe of “Christ-killers” gave vent to anti-Jewish campaigns, leading to the formation of ghettos. In once tolerant Spain, the Jewish population in 1492 was ordered to convert to Christianity or leave the country.
Early Christianity
Jesus, who was a rabbi, self-consciously assumed the role of Messiah that had been created in Jewish prophetic scripture. He began his religious career by submitting to baptism by John the Baptist, a ritual designed to remove sin and bring salvation in the Final Days. Jesus preached a simple message: “The Kingdom of God is at hand.” The apocalyptic scenario would unfold momentarily. In this scenario, the Messiah was a divinely appointed figure who would bring human history to an end and introduce God’s kingdom on earth. The three-year period of Jesus’ active ministry was devoted to preparing his followers for the Kingdom and fulfilling the scriptural conditions by which its arrival might take place. According to the Gospels, Jesus separated himself from the anti-Hellenic spirit of contemporary Jewish religion. He criticized the Pharisees, the most zealous anti-Hellenists, while he counseled cooperation with the Roman authorities in such matters as paying taxes. He railed against Jerusalem as a city notorious for killing prophets, and he himself broke specific religious laws. In some respects, his critique of Pharisaic legalism resembles Plato’s idealistic philosophy in its focus upon essential truths.
Yet, Jesus, a descendant of King David through Joseph, was a character positioned squarely within the Jewish religious tradition. His earthly role was defined by scriptural references to the Messiah, which were linked to expectations of the coming of God’s kingdom. Jesus was crucified before any such event took place. When, two days later, followers discovered that his dead body was missing from the tomb, this was taken as a sign that Jesus had been resurrected from death by God’s power and was therefore in a state like that of the supernatural Messiah. Heartened by the news of his resurrection, Jesus’ circle of disciples launched a spirited missionary movement to spread the good news. One not originally in this circle, the Apostle Paul, devised a new interpretation of Messianic events. Paul wrote that, in dying innocently upon the Cross, Jesus had atoned for the sins of others. His self-sacrifice would pay the price of admission to God’s kingdom for all believers, however sinful they might be. Yet, the early Christian community also awaited Jesus’ return to earth. The earlier Messianic expectations were transferred to Jesus’ Second Coming, when his glory and power would become visible. The book of Revelation, written by St. John the Divine near the end of the 1st century A.D., provided a mystical view, from a Christian perspective, of events in the Final Days.
Paul rationalized the failure of God’s kingdom to arrive promptly by suggesting that, starting with Jesus’ resurrection, the world was in the process of transformation from a temporal to a spiritual state. As with the dawning of a new day, the change was not initially evident. Slowly the degree of spirituality would increase in the world and then, at some point, people would see plainly that God’s kingdom had come. Every once in a while, as at Pentecost, one could see an outpouring of divine spirit, but mostly it was imprisoned within the material world. In language reminiscent of Plato, Paul urged Christians to fix their “eyes ... not on the things that are seen, but on the things that are unseen; for what is seen passes away; what is unseen is eternal.” He urged Christians to cultivate chastity so as to liberate themselves from bondage to the flesh. The inquiring spirit of the age also focused upon the person of Jesus. The Gospel of John begins with the idea of Logos, or God’s word. Jesus was believed to personify this word. In a philosophically intense society, Christians began then to question what kind of person, or God, Jesus was. Was Jesus a man with a physical body or was he a god, who was pure spirit? Or, perhaps, was Jesus both?
In places like Alexandria, with large Jewish and Greek populations, such questions were often on people’s minds. Diverse religions and systems of philosophy coexisted and freely mixed to form new theological hybrids. Philo, the Jewish Platonist, conceived of Logos as a mediating agent between the eternal and the temporal. Given the heavily philosophical disposition of this culture, it was likely that many arguments would take place concerning religion and many different conclusions would be reached, some of which would be considered heresies. The heretical position associated with Gnostic Christianity showed the influence of Neoplatonism. The Gnostics denied Jesus’ human nature and the historical record presented in the Bible. God only seemed to be involved in human affairs, and Jesus only seemed to be a man. Arian Christians, on the other hand, doubted Jesus’ divinity. Jesus the Son was subordinate to the Father, who was the one and only God. Marcion, an advocate of pure love, saw the Law of Moses as an evil influence. The British monk Pelagius believed sin was a result of misdirected free will. Montanus claimed to be the Paraclete or Spirit of Truth promised in the Gospel of John. Expecting the end of the world, the Montanists practiced speaking in tongues.
In 325 A.D., Constantine I convened the Council of Nicaea to resolve questions raised by the teachings of Arius. The Arian point of view, then dominant, was opposed by Athanasius, a church deacon from Alexandria. A key question was whether Jesus’ nature was “like” God’s, the Arian position, or “the same as” God’s. The Council decided to condemn Arius and his supporters and, instead, adopt the formulation of the Trinity. The Nicene Creed stated that Jesus was “the Son of God ... begotten not made, of one substance with the Father.” The Council of Ephesus, convened in 431 A.D., condemned the teachings of Nestorius, who opposed the designation of Mary as “Mother of God” and upheld Christ’s dual nature as man and god. In 451, the Council of Chalcedon condemned the Monophysite heresy which held that Christ had a single divine nature. Such questions were important for political as well as religious reasons. Several of the Germanic tribes whose kings had converted to Christianity embraced the Arian version of the faith. The Franks, on the other hand, won the Pope’s backing by supporting the orthodox version expressed in the Nicene creed. Elsewhere, Christians holding heretical views comprised important religious communities.
Nestorius, then Patriarch of Constantinople, called the wrath of the Christian community upon himself by attacking the idea that the Virgin Mary could give birth to a divine son. After the Council of Ephesus condemned his teaching, the Christian community at Antioch became deeply divided. Many of Nestorius’ followers emigrated to Iraq in the Sasanian empire where Nestorianism became the dominant faith of the Persian Christian church. Rebuffed in Europe, this doctrine became a missionary religion which spread to India, China, and Central Asia. According to Marco Polo, Nestorian chapels lined the trade routes between Baghdad and Peking.
Monophysite Christianity arose in reaction to Nestorianism. That faith was strong in Syria, Egypt, Armenia, and Abyssinia. Monophysitism is derived from the teachings of Eutyches. When Jacob Baradaeus became Bishop of Edessa in the mid-6th century, he organized the Jacobite church to serve Syrian Monophysites. The Coptic church was its counterpart in Egypt. The East Roman emperor declared the Council of Chalcedon invalid in 476, but later emperors vacillated. The excommunication and persecution of Monophysite Christians alienated members of this religious community from the Roman empire, paving the way for the Moslems’ quick and easy military victory in Syria and Egypt.
Development of the Western Church
The monastic life had its origin in the rejection of worldliness which some believed was infecting the Christian church after it became Rome’s state religion. It reflects the spirit of Neoplatonism and Gnostic Christianity with their dark ruminations concerning body and mind. Considering that Asoka had sent Buddhist missionaries to Egypt in the 3rd century B.C., the idea of monastic communities might also have come from India. St. Anthony, an Egyptian hermit, pioneered this type of Christian life. In 285 A.D., he withdrew to the desert wilderness to live in solitude where he was tempted by womanly apparitions, demons, and desires of the flesh, and attacked by wild beasts. His brave example attracted imitators, and a number of other hermits settled around him. After ignoring them for twenty years, he emerged from his solitude long enough to organize these people into a monastic community. The “anchorite” monks who followed St. Anthony were given to extravagant feats of self-denial. St. Simeon Stylites, for instance, sat for thirty-five years atop a stone pillar. Asceticism eventually gave way to religious communities which, isolated from the world, allowed individuals to grow in a holy state. In the 6th century, St. Benedict founded a monastery at Monte Cassino in Italy, which stressed a life of service to God. Irish monasteries were centers of evangelical advance.
By developing attractive models of Christian personality, these monks helped the church to win human hearts long after the age of Roman martyrdom had passed. Christianity was also advanced by church doctors and theologians, who, combating heresies, proposed answers to tough moral questions. It was advanced by brave and able administrators such as St. Ambrose, Bishop of Milan, who refused communion to emperors when their policies went against the interests of the church. Such efforts succeeded in enlisting the power of the state to suppress rival religions. Pope Leo I was instrumental in establishing the Roman church as a power separate from the Byzantine empire and ecclesiastical authority separate from secular authority. After Europe was invaded by barbarian tribes, the Christian church represented the cultural legacy of the fallen empire. It persuaded the barbarians that only through baptism could they join civilized society. In the beginning, the church evangelized areas that had fallen within the boundaries of the Roman empire. Later, its missionaries went beyond those limits to extend God’s spiritual empire to heathen lands. St. Patrick converted Ireland to the Christian faith, and Irish missions were then sent to northern England. An English missionary, St. Boniface, who was martyred in Holland, established the first German see in the 8th century.
As the Hebrew prophets had once turned Jerusalem’s fall to spiritual advantage, so, when Rome fell, Christianity profited from the writings of St. Augustine. The greatest Christian theologian since Paul, Augustine had once been a Manichee and a Neoplatonist. His Confessions told of riotous living as a young man in Carthage. He had converted to Christianity through the influence of St. Ambrose and his mother, St. Monica. From his later theological writings came the orthodox teaching of salvation by grace and the doctrine of original sin. Augustine wrote The City of God during the barbarian devastations of Italy and North Africa explaining why, after Rome had abandoned pagan gods and embraced Christianity, this great city fell. In answer, Augustine drew a distinction between worldly cities such as Rome and the “City of God”, which could never be destroyed. This City of God was a spiritual community, created through divine love, which was eternally unchanged. It stood in contrast to earthly cities, built from selfish desires and pride, which inevitably would pass away. So, as Rome’s secular empire crumbled, humanity clung to that which was safe from corruption and decay.
Perhaps the church’s ablest administrator was Pope Gregory the Great, who is credited with rebuilding the Roman church in a dark hour. Born to nobility, Gregory instead chose the hard life of a monk and later ascended the ladder of ecclesiastical offices. As Pope, he strengthened church discipline, reorganized the properties of the church, sent missionaries far and wide, negotiated with the Lombard kings for Rome’s political independence, and kept in check the rival claims of Byzantine bishops. A notable accomplishment was his role in converting England to the Catholic faith. In 597 A.D., Gregory recruited a Benedictine monk named Augustine for a mission to the British isles. Augustine and a retinue of forty monks were received cordially by King Ethelbert and given land at Canterbury to build a church. His timely arrival in Britain helped stop the spread of Irish Christian civilization which might have challenged Catholicism for leadership of Western Christianity. An agreement reached at the Synod of Whitby in 664 A.D. regarding the method of calculating the date of Easter and the shaving of monks’ heads tipped the scales decisively in favor of Rome.
Power of the Roman Church
Technically, the Pope was Bishop of Rome, leader of Christians in that city. He later assumed leadership of the entire church due to the apostolic origins of that position. The church at Jerusalem had initially assumed the leadership role. Jesus’ brother James was its leader. Rome replaced Jerusalem as the center of Christianity because the apostles Peter and Paul had moved to that city and been martyred there. The Roman church became a kind of spiritual government whose authority rested upon a continuous line of succession back to Peter, who was the first bishop of Rome. A famous passage in the Gospel of Matthew quotes Jesus: “You are Peter, the Rock; and on this rock I will build my church ... I will give you the keys of the Kingdom of Heaven; what you forbid on earth shall be forbidden in heaven, and what you allow on earth shall be allowed in heaven.” In medieval art, St. Peter was frequently shown with a set of keys in his hands, which were the keys to Heaven. In the Biblical quotation, Jesus was entrusting to Peter and, by implication, to Peter’s ecclesiastical successors the power to decide who would be permitted to enter Heaven.
When Christianity became the state religion of Rome, the Church received an additional boost to its authority. During the Dark Ages, the prestige of the fallen state passed to it as Rome’s legitimate heir. The Roman church was the remnant of a glorious empire that was no more. Popes used their prestige and authority in alliance with worldly rulers to create a dual system of governance. A universal church, whose spiritual jurisdiction covered the western half of the fallen empire, was paired with a multitude of secular states that were formed by the barbarian peoples involved in Rome’s collapse. The idea of reconstructing that empire was to become an enduring theme of European political history. The Frankish dynasty, supporters of the Roman church, acquired secular power in much of western Europe during the 8th century. It seemed that imperial rule might be revived when, in 800 A.D., Pope Leo III crowned Charlemagne emperor of the Holy Roman Empire. However, the secular government became divided again when Charlemagne died and, a generation later, his three grandsons partitioned the kingdom among themselves. Power in medieval society was shared by two institutions, church and state. The church looked after people’s spiritual needs and secular governments provided physical security.
While church and state worked cooperatively, there was also a power struggle. The head of the Roman church, the Pope, struggled to gain an advantage over secular governments by exercising his powers of recognition and, more forcefully, by excommunicating disobedient rulers. History records the contrite appearance of Emperor Henry IV before the pope at Canossa after Gregory VII excommunicated him in 1076. If the Church wished to punish a king, it could deny the sacraments to the king and his subjects, thus denying them entrance to Heaven. Kings and emperors, on the other hand, fought the Church through use of their earthly power. A particular point of contention was the struggle between Popes and European monarchs over the right to “invest” (appoint) local church officials. The Concordat of Worms resolved this question in the Pope’s favor but kings were allowed to supervise church elections. The administration of justice was divided between ecclesiastical and secular courts, each having certain powers and scope of authority. Pope Boniface VIII called ecclesiastical and secular governments the “two swords” of the church. Symbolizing the dual power structure, coins of the period often exhibited the Pope’s likeness on one side and the Holy Roman Emperor’s on the other.
On a personal level, the Roman church exercised its authority through the sacraments. These were rituals conducted by priests which were thought necessary for salvation. Seven sacraments were believed most important: baptism, confirmation, the Eucharist, penance, extreme unction, orders, and matrimony. Church doctrine asserted that sacraments were the means by which God transmitted his grace to humanity. Grace meant undeserved forgiveness of sins. The institution of the sacraments was based on the principle that all men were sinners in need of a forgiveness which they could not obtain by their own powers. The Eucharist, which was patterned after Jesus’ last supper with the Disciples, was the greatest of the sacraments. The early Christian community especially cherished this ceremonial meal because it was believed that Jesus would return during its celebration. In the 9th century A.D., a Benedictine monk named Radbertus wrote a treatise arguing that the bread eaten during celebration of the Mass was the flesh of Jesus and the wine which was drunk was his blood. Another monk suggested that these two substances were only symbols of Christ’s body and blood. The literal interpretation, more in tune with the spirit of the medieval church, was accepted at the Fourth Lateran Council in 1215.
The Church claimed competent authority to decide theological questions through its firm connection to Jesus and the Apostles. The historical record of God’s word presented in the Bible became a criterion of truth. At the same time, the Roman church put much emphasis upon traditional church teachings. Such doctrines, being inspired by the Holy Spirit, were considered to have equal authority with the sacred scriptures. “The Church has never erred and will never err to all eternity,” a papal declaration of the 11th century maintained. An earlier declaration held that “the Popes, like Jesus, are conceived by their mothers through the overshadowing of the Holy Spirit ... All powers in Heaven, as well as on earth, are given to them.” With such attitudes, it is not surprising that leaders of the Roman church instituted the Inquisition and burned heretics at the stake. Disbelief, in the form of rational inquiry, began to creep into the culture despite the Church’s best efforts to enforce its monopoly of faith. Given the importance of papal links to St. Peter, the Great Schism of 1378-1417, in which two rival popes each claimed authority, produced a severe crisis of confidence in the Papacy.
Perhaps the best evidence that the Roman church was becoming a worldly empire lay in its advocacy and use of military force. The church itself controlled certain territories in northern and central Italy. In 756, Pippin III gave the Pope temporal control of certain lands conquered from the Lombards as a reward for his support in recognizing Carolingian claims to the Frankish throne. The Papal States were drawn into a long struggle with Holy Roman Emperors and local powers to control this and other territories. However, the Church was also responsible for launching and maintaining the Crusades which were directed against the Islamic rulers of Palestine between the 11th and 13th centuries. Responding to complaints from Peter the Hermit and others that the Turks were harassing Christian pilgrims to Jerusalem, Pope Urban II issued an appeal in 1095 for European Christians to recapture the Holy City from the Moslems. A huge army led by Godfrey of Bouillon was assembled in Constantinople to carry out this mission. “Deus vult” - God wills it - was their battle cry. The Christian crusaders did capture Jerusalem in 1099 after a battle in which 70,000 civilians were massacred, and set a Frankish ruler upon the throne in that city. The First Crusade was followed by eight others, progressively less successful. In the end, Moslems retained control of that region.
Orthodox Christianity
The East Roman empire, which had survived the barbarian onslaught of the 5th and 6th centuries A.D., was continuously associated with the See of Constantinople. A church council held in 381 A.D. had declared that it ranked second after the See of Rome. The church council convened in Chalcedon in 451 A.D. gave Constantinople spiritual authority over western Turkey and the eastern part of the Balkan peninsula. In this realm, political rulers tended to dominate the religious institutions, following the principle which Justinian laid down in the 6th century: “Nothing should happen in the Church against the command or will of the Emperor.” The church became like a department of government in charge of religious ceremonies. The Metropolitan of Constantinople could make no claim similar to the Roman Pontiff’s of having authority that ran back to the Apostles. He merely exercised geographical jurisdiction. His scope of authority followed the lines of imperial power. Consequently, the center of power in the eastern church drifted toward Moscow after Constantinople fell to the Turks in the 15th century.
Orthodox Christianity put less emphasis upon the authority and structure of the church, the sacraments, priestly celibacy, and other worldly aspects of religion than the western Church, and more on theological questions. The Eastern church did not accept the solution of Chalcedon regarding Christ’s being: one “in two natures ... without change, without division ...” It did not accept the filioque clause in the Nicene creed: that the Holy Spirit had proceeded “from the Father and the Son.” Orthodox theology tended to stress a single nature, accepting the divinity of Christ at the price of neglecting his humanity. An issue peculiar to the Orthodox church was the controversy concerning iconoclasm. Visual representations of divine subjects, long tolerated in the Christian church, went against the grain of Judaic religion. Hoping to increase support among his Jewish and Moslem subjects, Emperor Leo III in 726 launched a personal crusade against the use of icons in the church. He demanded that the icons be destroyed and removed church officials who resisted. Leo’s iconoclastic program met with stiff resistance, especially in the monasteries.
John of Damascus argued that icons helped religious understanding. “When we set up an image of Christ in any place,” he wrote, “we appeal to the senses. An image is, after all, a reminder; it is to the illiterate what a book is to the literate; and what the word is to the hearing, the image is to the sight.” Leo remained unconvinced by such arguments. He continued with the idol-smashing campaign despite strong opposition and a growing rift with the western church. His son, Constantine V, was an even more ardent iconoclast. The Synod of Hieria in 753 A.D. formally supported the emperor’s position. Three decades later, Constantine’s grandson, Constantine VI, became emperor but was too young to rule so his mother, Irene, assumed power. When it became apparent that the young emperor too favored the iconoclastic program, Empress Irene took steps to block this. She convened a general council of the church to repeal previous decisions. To thwart opposition within her own family, she had the young emperor, her son, blinded and deposed. The use of religious images was again permitted. A revival of the iconoclastic campaign took place during the reign of Leo V. It was again stifled through the intervention of another icon-loving empress and regent, Theodora. Ultimately, a compromise was reached, banning three-dimensional images but tacitly permitting two-dimensional ones.
The furor concerning icons was one of several issues which drove a wedge between the eastern and western branches of Christianity. While the worldly power of the eastern patriarchs was limited by the Byzantine state, the head of the western church was becoming steadily more powerful. As the Roman pontiff claimed primacy within the church on the basis of his succession from Peter, so the Metropolitan of Constantinople claimed authority based on his relationship to the surviving Roman state. In that regard, the Pope’s coronation of Charlemagne as emperor of the Holy Roman Empire posed a direct challenge to the claims of the Byzantine Empire and its captive church. The issue of Photianism, an eastern declaration of independence from Rome, became the immediate cause of a rupture between the two branches of the church. Theologically, they were divided by the fact that the eastern church did not accept the “filioque clause”. The Great Schism officially took place in July 1054 A.D. when legates of Pope Leo IX excommunicated Michael Cerularius, the eastern patriarch. After Frankish crusaders sacked Constantinople in 1204 A.D., reconciliation between the two domains of Christendom became all but impossible. The Byzantine emperor did accept Rome’s spiritual authority in the 15th century, but it was too late to save the empire from conquest by the Ottoman Turks.
The salvation of the Byzantine church was its outreach to Slavic peoples. In the 9th century, the Patriarch of Constantinople sent a pair of scholarly brothers from Thessalonica, Constantine and Methodius, on a mission to neighboring peoples. They went first to Khazaria, but its rulers decided instead to convert to Judaism. Next the brothers received an invitation to the Slavic principality of Great Moravia (Czechoslovakia and Hungary). Constantine, also known as Cyril, brought with him the Glagolitic alphabet which he had invented for Slavs living in Greece. The brothers adapted this script to the local idiom and established a mission. Although they were driven out of Moravia through pressure from Frankish German priests, some of the remaining Orthodox clergy made their way to Bulgaria, carrying the Glagolitic script. Bulgaria had converted to the Eastern Orthodox faith in 863. Its ruler, Khan Boris-Michael, received the Moravian refugee clergy with the idea that their Slavonic-language script would enable Bulgaria to develop its own national church and remain politically independent of Constantinople or Rome. In 885, Bulgarians simplified the Glagolitic script, naming it “Cyrillic” after Cyril. It was this script primarily which brought Slavic peoples such as the Russians into the Orthodox fold.
Bulgarian peasants reacted to their nation’s adoption of Orthodox Christianity by embracing a religious creed known as Bogomilism, which an Orthodox priest, Bogomil, had devised between 927 and 954. This was an anticlerical doctrine adapted from Paulician Christianity, a Thracian heresy. Bogomilism held that the world had been created by Satan, who was God’s older son, and that Jesus, God’s younger son, was sent to earth to overthrow Satan and rescue mankind. Another version put good and evil on a parity. While rejecting the established church, the Bogomils practiced celibacy and asceticism, perhaps to distinguish themselves from the loose habits of the Orthodox clergy. Bogomil missionaries spread this religion to other parts of the Balkan peninsula, especially Bosnia, where the ruling family embraced it as an alternative to the Hungarian Catholic and Serbian Orthodox faiths. The French Albigenses belonged to the same movement. The Bogomil heresy was fiercely suppressed and died out with the expansion of Islam in the Balkan region.
Eastern Europe was a battleground between the Roman Catholic and Greek Orthodox faiths at the close of the 1st millennium A.D. Poland and Bohemia broke with Slavic peoples elsewhere in affiliating with the Roman church. To forestall Teutonic encroachment upon Polish territory, Duke Mieszko I (960-992) placed his realm under the Pope’s direct protection and control. The Polish Piast dynasty subsequently conquered territories as far east as Kiev and blocked the Teutonic German advance along the Baltic Sea. Russia’s conversion to Eastern Orthodox Christianity coincided with the baptism of Prince Vladimir of Kiev in 987. The prince made his selection of religions from among several competing types after receiving the hand of Emperor Basil II’s sister, Anna, in marriage. Vladimir ordered his subjects to be baptized en masse. Missionaries from Bulgaria brought the Old Church Slavonic liturgy and the Cyrillic alphabet to Kiev. The Mongols conquered the Ukraine in the 13th century and held it for over two centuries. When Mongol power subsided, the Dukes of Moscow began annexing territories in what later became the Russian state. After Ivan III married the last Byzantine emperor’s niece and took the title of “Czar”, Moscow became the new center of the Orthodox faith. The Patriarch of Constantinople was given civil authority over Christians living in the Ottoman empire.
The Later Persian Religions
The religion of Zoroaster supported the first (Achaemenian) Persian dynasty. Like most other philosophical religions, Zoroastrianism had to be softened by personal features to make it suitable for devotional worship. Although Zoroaster was a monotheist, a later version of his religion turned the separate aspects of Ahura-Mazda into lesser deities. The Magi were hereditary priests of this religion. Christians know them as the three wise men who, following the Star of Bethlehem, brought gifts to the infant Jesus. The Arsacid dynasty which ruled the Parthian empire for more than four centuries personally embraced Magian Zoroastrianism but was tolerant of other religions. The Sasanid family, which supplanted the Arsacids in 221 A.D., were priests of the pre-Zoroastrian water goddess, Anahita, whose cult had been incorporated into the Zoroastrian religion. Sasanid rulers were therefore more zealous in promoting that religion.
In 240 A.D., a Persian prophet named Mani began preaching that he was a reincarnation of the Holy Spirit. A self-styled successor to Zoroaster, Buddha, and Jesus, he claimed to have received the final and most complete revelation of God. Emperor Shahpuhr I gave Mani permission to preach his new religion throughout the empire. Missionaries spread Manichaeism, as well, to Egypt, Central Asia, and the Roman empire. Like Zoroastrianism, its theology centered upon the opposition of evil and good, darkness and light. Man needed to be redeemed from his material nature through Christ’s divine light. After Shahpuhr died, priests of the Zoroastrian state religion persuaded Emperor Vahram I to arrest Mani and put him to death. As with Jesus, however, the prophet’s death and the subsequent persecution of his followers had a stimulating effect upon the religion. In north Africa, the future St. Augustine was briefly a Manichee. The Manichaean faith became the national religion of the Uighur Turks living west of China. It also influenced the Paulician, Bogomil, and other Christian heresies.
Since Christianity was the Roman state religion after Theodosius I’s ban on pagan religions in 391 A.D., Sasanian emperors tended to view Christians living in Persia as a potential fifth column. Likewise, Roman emperors mistrusted Zoroastrians. In 297, Diocletian denounced Egyptian converts to Manichaeism as Persian sympathizers even though Persian emperors had put Mani to death and persecuted his followers. After the Council of Ephesus condemned Nestorian Christianity in 431, Roman Nestorians moved across the border to Nisibis in Persia where they were welcomed as refugees. Their persecution in Rome cleared them of suspicion. However, in 440, Emperor Yazdigerd II ordered all his subjects to convert to Zoroastrianism. This provoked a rebellion in Christian Armenia, which was crushed. The Persian military defeat by the Ephthalite Huns in 484 forced the Sasanian government to back down and tolerate non-Iranian Christians.
The same disaster produced a social crisis that was accompanied by a religious movement led by Mazdak, head of the Drist-Den Manichaean sect. It was a communist movement formed in response to the economic inequality of Persian society. Emperor Kavadh I became a convert and put through its program of reform. The Persian nobility and Zoroastrian clergy together opposed the Mazdakites. Ultimately, the emperor himself disavowed them at the urging of his son and heir, Khrusro I, who later crushed this movement. In 572, Khrusro I began a war with the East Roman empire, which lasted until 590. Another war between the Christian Roman and Zoroastrian Persian empires broke out in 604. It was not settled until 628. The Arabs attacked both empires simultaneously five years later. Exhausted from its Roman wars, the Persian empire was extinguished. The Sasanid capital of Ctesiphon fell in 637.
Most Zoroastrians in Persia readily accepted Moslem rule. A few fled to northwestern India where they were granted asylum on condition that they refrain from proselytizing. They became known as the Parsee sect, numbering today fewer than a million persons. Another group fled westward to China through a part of Turkestan which Khrusro I had annexed to the Persian empire. A Sasanid prince reached Ch’ang-an, the Chinese capital, as a refugee in 674. All three of Persia’s principal religions - Zoroastrianism, Manichaeism, and Nestorian Christianity - penetrated China from the west during the early T’ang period. Manichaeism, being the Uighur Turks’ national religion, made perhaps the greatest inroads. However, Kirghiz nomads defeated the Uighurs in 840. In 841-845, the Chinese government conducted a crackdown against foreign religions at the instigation of Taoist clergy. While the Buddhists suffered mainly economic losses, this campaign of xenophobic persecution was fatal to the Persian religions established in China.
The Religion of Islam
Religious and political conflict beset Arabian peoples in the early 7th century A.D. The last war between the East Roman and Sasanian Persian empires took place between 604 and 628 A.D. Arabs served as mercenary soldiers for both sides. In the process, they acquired valuable experience in war as well as the latest military equipment. These Arabs were immersed in religious controversies as Christians, Jews, Zoroastrians, and Manichees struggled for dominance. A large number of Jews lived on the Arabian peninsula in cities such as Yathrib (Medina) and Khaybar. Yemen in the south was first a Christian kingdom and then a state controlled by the Persian empire. In the 3rd century A.D., Mani had claimed to be a prophet in the lineage of Zoroaster, Buddha, and Jesus, one who put a “seal” on prophecy. Later the same idea was taken up with greater effect by Mohammed, founder of the Islamic religion.
Islam means self-surrender or submission to God. God, whose name is Allah, is the same as the Jewish or Christian God according to Moslem teachings. However, these earlier religions had become corrupted so that a new prophet had been commissioned to deliver a revelation that would set humanity straight. The prophet Mohammed lived in the city of Mecca which was situated on the trade route between Yemen and Syria in western Arabia. He conducted caravans between Mecca and Damascus for his wife, Khadijah, who was a wealthy widow. While in Syria and Palestine, Mohammed had been exposed to the Jewish and Christian religions. He became ashamed of the polytheistic religion of the Arabs which seemed primitive in comparison with them. In 611 A.D., at the age of forty, Mohammed had a vision in a cave near Mecca in which the Archangel Gabriel commanded him to transmit a new revelation of God to the people of Mecca. This was a message of monotheism confirming earlier Judaic teachings. Gabriel’s lengthy dictations to Mohammed were compiled in a collection of Arabic-language writings known as the Koran. Mohammed’s religion imposed strict personal disciplines such as a prohibition against drinking alcohol or eating pork and religious duties that included daily prayers, annual fasting, and pilgrimages to Mecca. It also forbade usury and abuse of the poor.
For twelve years, Mohammed tried to persuade fellow residents of Mecca to adopt this new religion, but his efforts met with limited success. Although a Quraysh tribesman, he was not part of the inner circle that controlled the city. Also, Islam’s monotheistic principles conflicted with the polytheistic cult of the Ka’bah, a shrine housing a sacred black stone, whose annual festival was economically important to Mecca. Mohammed’s fortunes suddenly changed when, in 622 A.D., he received an invitation to head the government of Medina, a neighboring city torn by political rifts. Mohammed proved to be an able administrator. His theocratic government in Medina united the quarreling factions and grew militarily strong. Its armies waged aggressive war first against Mecca and then other Arabian cities. A factor aiding in their success may have been that Mohammed allowed his followers to attack caravans and plunder defeated enemies. The rich Jews of Medina, who refused to convert to Islam despite its acceptance of a single God, were a particular target. By the time of Mohammed’s death in 632 A.D., his Islamic empire controlled most of the Arabian peninsula.
After the prophet’s death, local Arabs revolted. The cities of Mecca and Medina, controlled by the newly converted Quraysh clan, opposed them as Islamic loyalists. Mohammed’s temporal successor or “caliph”, Abu Bakr, persuaded the other Arabs to end their revolt and join forces in conducting military raids against the East Roman and Sasanian Persian empires whose armies were exhausted from more than two decades of war. Their roads intact, the Moslem armies rapidly overran the domain of the Persian empire. They pushed the East Roman empire back into an area north of the Taurus mountains in Turkey. Syria, Palestine, Mesopotamia, and Egypt fell to the Moslems by 641 A.D. The Sasanian empire was extinguished by 651 A.D. During the next half century, Islamic forces took Armenia and Georgia. They conquered all the East Roman territories in northwest Africa and the Visigothic kingdoms in Spain and southwest France. In the East, they captured the Ephthalite Huns’ possessions in Uzbekistan and Transoxania (south and east of the Aral Sea), as well as lands adjoining the Indus River. However, the Moslems failed on two occasions to take Constantinople. Their push northward through France was checked at the battle of Tours in 732 A.D.
Contrary to an opinion expressed in Europe, this was not a campaign to force conversions to the Islamic faith. Membership in other religions was tolerated so long as their people submitted to the Islamic government and paid a surtax. These people had their own self-governing communities, civil codes, and religious leaders. Arab military commanders served as governors of the conquered territories. Lacking a literate corps of administrators, they wisely left civil administration in the hands of their hellenized Christian and Persian subjects. Nestorian, Monophysite, and other persecuted Christian sects generally welcomed the change in governments. Many voluntarily converted to Islam because it was financially advantageous to do so. The Arab conquerors wore their religion as a badge of national pride. The Umayyad dynasty, which Mu’awiyah founded at Damascus in 661, established Arab Moslems as a privileged class. Exempted from paying poll taxes, they also received regular payments from the state treasury. When Caliph Umar II (717-20) abolished the poll tax for non-Arab Moslems, it precipitated a financial crisis. Caliph Hisham’s subsequent substitution of a land tax upon non-Arabs to replenish the treasury caused much dissatisfaction. The Umayyad rulers were replaced by the Abbasid dynasty in the Arab Civil War of 747-750 A.D.
Theoretically, the Abbasid insurrection was about legitimacy of succession. Their claimant to the caliphate belonged to the Prophet’s own family, being descended from Mohammed’s uncle Abbas, whereas the Umayyad rulers traced their lineage back to a Qurayshite tribesman unrelated to the Prophet. After Abu Bakr’s death in 634, Umar was elected caliph. A wise and effective ruler, he was assassinated by a Persian slave in 644. The next caliph, Uthman, was less capable. He was assassinated in 656. Ali became the next caliph. Opposed by Aisha, Mohammed’s widow, and some of the Prophet’s companions, he was assassinated in 661. Ali’s eldest son, Hasan, was elected to succeed him. However, Mu’awiyah, governor of Syria, was recognized as caliph in Damascus. Mu’awiyah persuaded Hasan to give up the caliphate in exchange for a royal pension and a harem in Medina. This arrangement held until Mu’awiyah’s death in 680. Then Ali’s younger son, Husayn, set forth from Medina with a group of supporters to claim the caliph’s position. Mu’awiyah’s son and successor, Yazid, sent a small army which intercepted Husayn at Karbala. When Husayn refused to return to Medina, Yazid’s forces slaughtered him and his supporters. They brought the head of Husayn, Mohammed’s grandson, back to Yazid in Damascus.
This shocking event led to a schism within the Islamic community. The Shi’ite Moslems, predominant in Persia, regarded the Umayyad dynasty as usurpers of the caliphate. They supported the rival claims of Ali’s descendants on the basis of their blood line running back to Mohammed. For them, Husayn’s murder in 680 came to symbolize the abuse suffered by non-Arab minorities under Umayyad rule. Sunni Moslems, on the other hand, represented the Umayyad loyalists. They were the mainstream group during the period of Arab ascendancy. Abu’l-Abbas, a descendant of Mohammed’s uncle who had come to power with Shi’ite backing, became caliph after the upheaval of 747-50, founding the Abbasid dynasty. His successor, Mansur, moved the capital of the empire from Damascus to Baghdad. There Persians gained political and cultural ascendancy. Meanwhile, an Umayyad refugee, Abd ar-Rahman, escaped to the Iberian peninsula where he founded a Sunni state. Now there were two rival caliphates, and the political unity of Islam was lost. The succession to the caliphate has thus become a greater source of controversy within Islam than questions of philosophical belief. Heresy plays a smaller part in the Islamic religion than in the Christian. Perhaps that is because, in contrast with Jesus, who preached about another world, Mohammed left specific instructions about many earthly things.
Islamic religion, like others in the Judaic tradition, includes belief in the Last Judgment and in Heaven and Hell. Persons who remain faithful to the religion, especially those who died fighting for it, will be accepted into paradise while infidels will spend an eternity in Hell. Islam attaches much importance to interpretations of law. The Koran, which includes many of Mohammed’s spiritual teachings and administrative rulings, is a principal source of this law. In addition, scholars have assembled collections of stories about Mohammed and sayings attributed to him. Mohammed once said: “My community will never agree in an error.” That statement has given sanction to legal interpretations not found in the Prophet’s teachings which have become accepted within Islamic society. This culture is tolerant of doctrinal differences. Within the Sunni tradition, there are four different schools of Islamic law which are considered equally valid. A community is free to pick whichever it prefers. Theological questions are decided by a consensus of learned opinion. The caliph is strictly a political authority. Ibn Taymiyya taught that any state governed in accordance with Islamic law belongs to Islam, whether or not it has a caliph.
The centuries which followed the founding of the Abbasid dynasty in 750 A.D. brought a flowering of Islamic culture. Baghdad in the 9th century A.D. was a cosmopolitan city, alive with both commercial and intellectual activity. While the Arabs had lost official privileges, their language acquired a rich literature as many poems were written in Arabic and works from other cultures were translated into it. New translations of Greek philosophical writings became available during this period. The Islamic religion developed a theology competitive with that of other religions. Mutazilite scholars debated such questions as predestination, free will, and justification by faith. The doctrine of a “created” Koran as embodiment of God’s word was analogous to Christ’s role in Arian Christianity. One type of religious thinking tended to be legalistic. A second represented the rationalism of theologians like the Mutazilites. A third, which stood in stark contrast with the other two, sought direct experience of God. Persian Shi’ites in the late 10th century formed a fraternity of Sufi mystics who practiced their religion through poetry, ecstatic chanting, and dance.
Islamic Empires
The Abbasid revolution of 750 A.D. ushered in a period of confusing political events. In 756, a refugee from the House of Umayya established a new dynasty on the Iberian peninsula where Sunnis comprised a majority of the population. However, this regime was under intense pressure from Frankish Christians to give up territory. Three new Moslem states ruled by Shi’ite separatists were formed in Algeria between 757 and 786 A.D. Morocco became an independent state in 788 under the Alid (House of Ali) king Idris I. In 800, a Sunni state which recognized the suzerainty of the Abbasid dynasty was established in Tunisia by Aghlabid Arabs. Isma’ili (Seven-Imam) Shi’ites denying the Abbasids’ legitimacy overthrew this regime a century later. In Iran, where the Abbasid revolution had originated, several insurrections took place after the second caliph, Mansur, put to death in 754 the man who had instigated the rebellion against the Umayyad dynasty. Though fractured, Islam’s political empire was continuing to expand. In 751 A.D., Abbasid armies defeated Chinese forces at the battle of the Talas River. Umayyad Moslems evicted from Iberia captured Crete from the East Roman empire in 826 A.D. The Aghlabids from Tunisia conquered most of Sicily. Qarluq Turks, who later occupied the Tarim basin, were converted to the Sunni sect in 960.
The 10th and 11th centuries A.D. were times of tribulation for the Islamic world. Its rulers fought with the East Roman empire and later with western crusaders for possession of Sicily, Syria, and Palestine. Nomadic tribes including Turks, Arabs, and Berbers overran large areas of the empire. In 945 A.D., Buwayhid rulers of a Moslem state in western Iran seized Baghdad, reducing the Abbasid caliphs to figureheads. That put Iranians and Berbers of the Tunisian Fatimid dynasty in control of much of the Islamic world, excluding Spain. Qarluq and Ghuzz Turks, including a band loyal to the House of Saljuq, entered Asia Minor. In 1055 A.D., Saljuq Turks, embracing the Sunni faith, replaced the Buwayhid Shi’ites on the throne in Baghdad. These Turkish Moslems chose to retain the Persian administrators. Saljuq Turks in Anatolia established the Sultanate of Rum in 1077. The Saljuqs allowed other Turkish tribes to enter Armenia. En route, they devastated Iran. Arab nomads trekking west through north Africa ruined the olive fields which had dated from Carthaginian times. During this turbulent period, the Islamic religion acquired a softer, personal side thanks to an Iranian scholar, Ghazzali, who introduced mysticism into the Sunni tradition. His Revival of the Religious Sciences is Islam’s best-known theological work.
Under fierce attack from western Christians, Islamic rulers held most of their territory during the 12th and 13th centuries A.D. A Turkish officer of the Saljuq empire drove the Frankish crusaders out of their Syrian strongholds and established a new kingdom in Egypt. Salah-ad-Din (Saladin), a Kurdish officer in its employ, later set up his own kingdom. Saladin recaptured Jerusalem from the Franks in 1187. He later repelled avenging Christian armies of the Third Crusade. Saladin’s dynasty was inherited by a consortium of Turkish military slaves, the Mamluks. A more serious threat than the Christian crusaders was the attack on Islamic territories by the Mongol hordes beginning with Genghis Khan’s devastation of Khwarizm in 1220-21. The Abbasid caliph Nasir created a new chivalric order, the futuwwah, to meet this military threat. Moslem kingdoms in Turkey and Iraq fell to the Mongols. The Abbasid caliphate was liquidated in 1258 A.D. However, the Mongols were unable to conquer Syria or Egypt because of Mamluk opposition. Defying earlier expectations that the Mongols and western Christians might form a grand alliance, the rulers of three Mongol successor states in the western part of the empire later became Moslems.
Nestorian and Monophysite Christians living in Asia Minor, once in the majority, converted to Islam in large numbers during the 14th century. Afterwards, only a small part of the population continued to profess the Christian faith. On the other hand, Moslems were steadily being expelled from the Iberian peninsula as Christian kings advanced. Political adversity did not prevent a great flowering of Moorish culture before its empire disappeared. The last Islamic stronghold at Granada fell to the Christian monarchy of Aragon and Castile in 1492 A.D. The religion of Islam began to make inroads into the African population south of the Sahara desert. In Mamluk Egypt, Coptic Christians were a dwindling part of the population. Arabs infiltrating into Nubia from Egypt gradually converted its people from Monophysite Christianity. The Abyssinian kingdom, south of Nubia, remained Monophysite Christian until the 16th century A.D. Islam also achieved peaceful conversions in Malaya and Indonesia, coexisting with the Buddhist and Hindu religions. Some conversions took place in western China.
Turkish nomads from central Asia had been drawn into the sedentary population of Asia Minor during the 11th century when Saljuq Turks captured the Abbasid empire. Between 1261 and 1300 A.D., other, more warlike Turkish people who had been subjects of the Mongols occupied most of present-day Turkey while the East Roman empire was retaking Constantinople from the western Christians and neglecting its Asian provinces. When Mongol rule was extinguished in 1335 A.D., there was a competition among the Turkish tribes to establish a successor state in the area. Waging war in the spirit of jihad, the Ottomans won that contest by capturing several key cities during the first half of the 14th century. They increased their power by recruiting other Turks for their armies and using Christians to perform economic functions. In the late 14th century, a new barbarian scourge appeared in the person of Tamerlane, self-styled successor to Genghis Khan. He led Moslem armies from Transoxania on a rampage through India, Russia, and the Middle East. Tamerlane’s horde temporarily seized the Ottoman possessions in Asia. Once this threat had subsided and the Asian lands were reconquered, a new revolt against Ottoman rule broke out in Bulgaria, organized by a Sunni mystic. Another took place in Asia Minor a century later. The Ottoman Turks suppressed both rebellions.
The second rebellion, which occurred between 1511 and 1513 A.D., involved Shi’ite sympathizers of Shah Isma’il, founder of the Persian Safavi empire. This empire grew rapidly between 1500 and 1513, reaching its northeastern limit in territories inhabited by Uzbek nomads and its western limit in the Ottoman empire. In a land once predominantly Sunni, Shah Isma’il required his Iranian subjects to adopt the Shi’ite religion. The Safavi army was composed of Qizilbash soldiers in red headgear who had once lived under Ottoman rule. A spirited group, they belonged to a Sufi religious order of which the Shah was the spiritual head. The Ottoman Turks defeated the Safavi forces at the Battle of Chaldiran in 1514 and later seized Iraq. After Shah Abbas I recovered Baghdad from the Turks in 1623, he built a beautiful new capital at Isfahan. A contemporary, the Mogul emperor Shah Jahan, built the Taj Mahal at Agra in India. The Safavi empire was overthrown by Afghan nomads occupying Isfahan in 1722. However, it was resurrected after a short time by Nadir Quli, a Turkish soldier who invaded India. Ruling as Shah of the Afshar dynasty, he was assassinated by officers of his own guard in 1747. An Afghan successor state then took possession of Persia and India.
A third Islamic empire, the Mogul, was created in India when a descendant of Tamerlane, Babur, invaded northern India from Afghanistan. Babur defeated the sultan of Delhi at the battle of Panipat in 1526. He seized the cities of Agra and Delhi and soon controlled much of northern India. However, Babur’s son, Humayun, lost this territory to the Bengali Afghan emperor Sher Shah Sur. The Mogul dynasty was established on a more solid footing when Humayun reconquered the kingdom of Delhi in 1555. Humayun’s son, Akbar, expanded the empire to include Afghanistan, Baluchistan, and lands in India as far south as the Godavari river. His royal court became a center of learning and the arts. Since Akbar’s domain included a largely Hindu population, his regime depended heavily on Hindus for military and administrative support. Concerned about the Hindus’ loyalty to a Moslem state, Akbar hosted a series of religious dialogues between representatives of the Moslem, Hindu, Zoroastrian, and Roman Catholic Christian faiths, seeking common ground. In 1582, he announced the creation of a new monotheistic religion called the Din-i-Ilahi, of which he, Akbar, was the prophet. This venture provoked a rebellion in Moslem circles and never caught on.
At the start of the 17th century, the Islamic world was divided into three great empires: the Ottoman empire in Turkey, the Safavi empire in Iran, and the Timurid Mogul empire in India. The Ottoman dynasty, which began in the 14th century, was extinguished in the peace settlement that followed World War I. This Sunni Moslem empire, which had conquered both Mamluk Egypt and the East Roman empire, included most of the territory bordering the eastern Mediterranean and Black seas as well as lands in north Africa, Egypt, Arabia, Hungary, and the Balkan peninsula. Its capital was Istanbul, formerly Constantinople. The Ottoman rulers followed a policy of excluding their free-born Moslem subjects from top military and administrative positions. Their army was staffed by specially selected slaves called “janizaries”, who typically were Christians abducted as boys from peasant families. As a result, Greek Christians held the reins of government in this Islamic state. Ottoman power was threatened at sea when Portuguese vessels seized their trading ports along the Indian ocean during the 16th century A.D. Czar Ivan IV cut off the empire’s contact with Uzbek Moslems by annexing Kazan and Astrakhan in the 1550s. A currency decline brought on by the Spaniards’ silver-mining operations in the Americas produced an economic crisis.
The Moslem empires in Persia and India expired during the 18th century. After Nadir Quli’s death in 1747, the Afghan Durrani dynasty founded by Ahmad Shah Durrani took control of the eastern part of his empire while battling the Hindu Marathas in India; in Persia, the Zand dynasty took power. A eunuch, Aga Mohammed Khan, overthrew this regime in 1794 and established the Kajar dynasty, which lasted until 1925. Czarist Russia began to encroach upon Persian territories in the 19th century. Afghanistan was detached from Iran in 1857. The last Shah, Mohammed Reza Pahlevi, was deposed in 1979 by forces supporting the Ayatollah Khomeini. Akbar’s Mogul successors in India abandoned his policy of tolerance towards Hindus. When Emperor Aurangzeb sought to impose his rule on the southern tip of India, it provoked a furious Hindu counterattack. However, Afghan forces under Ahmad Shah Durrani invaded northern India and defeated the Hindu armies in 1758-61. About the same time, British forces under Robert Clive defeated the French. Weakened by wars with the Hindus and Sikhs, the Mogul empire was ruined. The British East India Company ran the Indian government under a succession of puppet regimes. The British crown took possession of India in 1858 and granted this colony its independence in 1947. Hindu India and Moslem Pakistan became two separate nations.
The Hindu and Buddhist Religions
A most ancient religion developed in northern India during the middle and latter part of the 2nd millennium B.C. The Aryan conquerors of India brought with them a pre-philosophical religion of rituals and prayers intended to achieve practical results. This religion had a pantheon of nature gods and goddesses, not unlike that of the Greeks. The hymns, myths, prayers, and poetic utterances, long carried within the memory of priests, were eventually written down in a collection of Vedic-language literature called the Rig-Veda. This religion had a powerful Brahman priesthood and a caste system which perpetuated social roles. Public ceremonies such as the horse ritual, which dramatized military victories, reinforced Aryan values. Priestly commentaries in the Brahmanas and Aranyakas explained liturgical practice and discussed the mysteries of the universe.
In the last section of the Veda, called the Upanishads, philosophical discussions appear concerning man’s relationship to God. The individual person, or soul, was seen to be experiencing a cosmic journey which includes life in this world. This life is a kind of bondage to delusional existence. The soul of each person is actually identical with the universe as a whole. The Hindu cosmology involved a belief that human souls were born and reborn in cycles of reincarnation. One’s status in the next life depended upon the moral quality of actions undertaken in this and previous existences. The law of karma stated that each action had a consequence in the soul’s future experience. Wrong or hurtful acts might bring lower status or seemingly unjust treatment in a future life, while benevolent actions would be rewarded. Conversely, one’s situation in the present life could partly be explained by one’s activities in previous incarnations. Such an explanation helped to reconcile individuals to their place in the caste system. It created an incentive to behave. The goal, however, was to escape the treadmill of reincarnations and be released into the cosmic whole. Certain yoga exercises or other methods known to the priests helped to hasten that process.
Buddhism is one of two Hindu “heresies” of the 6th century B.C., Jainism being the other. The Hindu salvation, “nirvana” or release from the cycle of earthly rebirths, was not available to ordinary people. If one followed the “way of works”, it was still necessary to be reborn as a Brahman to achieve nirvana at death. If one followed the path of knowledge through the Upanishads, one needed time for contemplation and study. Buddha and Mahavira, founder of Jainism, offered salvation to everyone. “No Brahman is such by birth ... A Brahman is such by his deeds,” Buddha declared. Jainism required strict asceticism and total renunciation of the world. Buddhism offered a “middle way” between asceticism and living in the world. Buddha saw a moral dichotomy between selfishness and love of truth. “Learn to distinguish between Self and Truth,” he said. “If we liberate our souls from our petty selves, wish no ill to others, and become clear as a crystal diamond reflecting the light of truth, what a radiant picture will appear in us, mirroring things as they are, without the admixture of burning desires, without the distortion of erroneous illusion, without the agitation of clinging and unrest.”
Siddhartha Gautama, the Buddha, was born in Nepal in 567 B.C., son of a petty king. A seer informed his father that Buddha was destined to become the greatest king in history. If, however, he saw four things - disease, old age, death, and a monk who had renounced the world - then he would forgo that destiny to become the discoverer of a universal path of salvation. Buddha’s father, wishing to have a royal successor, tried to shield the boy from those experiences, but to no avail. Buddha saw each of the four fateful situations during a ride in the park. He renounced his throne, abandoned his wife and infant son, and spent six years practicing spiritual disciplines including physical self-torture and philosophical study as a wanderer and hermit. Finally, after meditating for seven weeks under a Bo tree, Buddha experienced personal enlightenment in the form of an insight concerning human suffering. He returned to the world to teach this doctrine as an itinerant preacher until his death in 483 B.C. The group of disciples who accompanied him became the nucleus of the Buddhist sangha, a monastic community. Buddha’s followers produced a scripture from the memory of his teachings.
The insights which Buddha had under the Bo tree can be summarized in a set of philosophical principles called the “four noble truths”. They include the ideas that:
(1) Life is filled with sorrow.
(2) Sorrow originates in personal desire.
(3) Sorrow ends when desires end.
(4) The way to end desire is by following the “eight-fold path.”
This path consists of the following elements:
(1) right belief,
(2) right resolve,
(3) right speech,
(4) right conduct,
(5) right occupation,
(6) right effort,
(7) right contemplation, and
(8) right meditation.
If one attains complete extinction of desire, one achieves the blissful state of nirvana. Nirvana is an attitude of detachment from the world which brings freedom from pain. Having reached this spiritual goal, the human soul is spared further rebirths.
Though born in Nepal, Buddha spent most of his life in northeast India in the present-day state of Bihar, near the Ganges river. It was the site of the powerful Magadha kingdom. Buddha often preached in a deer park at Sarnath, which adjoined the holy city of Benares. Like Confucius, he and his followers wandered among warring kingdoms without interference. Neither Buddha nor Mahavira belonged to the Brahman class. Both opposed the caste system and filled the ranks of their followers with men and women of all backgrounds. Hinayana Buddhism, which represented Buddha’s original teaching, grew out of a council to certify the accuracy of these doctrines and set rules for the sangha. The third council was held during the reign of the Indian emperor, Asoka, more than two hundred years after Buddha’s death. He was Buddhism’s great patron.
Asoka (reigned 269-232 B.C.) was the grandson of Chandragupta, founder of the Mauryan dynasty. He conquered neighboring kingdoms until his empire included much of the Indian subcontinent. Remorseful after the bloody conquest of Kalinga, Asoka converted to Buddhism in 261 B.C. He announced that he would cease to pursue military conquest and instead seek conquests of religion. Asoka joined a Buddhist lay order and promoted Buddhism within his realm. He sent Buddhist missionaries to Syria, Egypt, Greece, and Ceylon. While Buddhism was the state religion of the Mauryan empire, Asoka tolerated other religious practices. He promoted a strict ethical code, including the humane treatment of animals. In his zealous attempt to remake Indian society, emperor Asoka resembles China’s first emperor, Shih Hwang-ti, who lived during the same century. Unlike him, however, Asoka did not leave an enduring model of political empire by which the state might become resurrected after a dynastic decline. Instead, his pacifist policies invited political disintegration. The empire fell apart fifty years after Asoka’s death. Yet, his adoption of Buddhism as a state religion set an important precedent for the coming age.
The early Buddhist religion consisted of doctrines, scripture, and traditions associated with the Hinayana branch, sometimes called Theravadin Buddhism. It accepts the Pali canon adopted at the time of Asoka. This philosophically inclined religious path allows only a few persons who strictly follow Buddha’s example of worldly renunciation to achieve nirvana. Someone who marries, has children, and earns a livelihood might become a lay follower of Buddhism (as Asoka was), but that person could not attain the ultimate goal of spiritual release and bliss. To become a mass religion, Buddhism had to provide a means of salvation within everyone’s reach. The Mahayana or “greater vehicle”, which was developed in Bactria at the time of Jesus, offered salvation through a personal savior. It asserted that Buddha had taught an inner circle of followers a higher teaching which allowed anyone to gain release. The idea was that Buddha, showing compassion for other suffering souls, had delayed the time of his own departure from earth in order to save others. Because this saving help from Buddha is universally available, the devotee can remain engaged in worldly pursuits while continuing on the path to nirvana.
Buddhism spread to the Bactrian empire of northwest India after Asoka’s death. The Bactrian king Menander (160-130 B.C.) converted to its religion. Later, the Kushan emperor Kanishka (ca. 100 A.D.) became an ardent patron. There was a strong Greek influence in the Bactrian culture expressing itself through written language, philosophy, and the visual arts. That was the environment in which Mahayana Buddhism developed. Greek philosophy and the Zoroastrian cosmology of Heaven and Hell transformed Buddhism from a philosophical religion into a cult of personal saviors or “bodhisattvas” - Buddha-like personalities embodying the essence of enlightenment. They were ones who had attained Buddhahood but had declined to enter nirvana until other sentient beings preceded them. The Mahayana Buddhist religion readily enlisted the local gods of different regions in that role. Under influence of the Greek visual arts, Buddhism projected itself through statues of the Buddha seated in contemplation; one finds such images in numerous temples and caves. The Mahayana sect taught life after death, which increased its popular appeal. The “compassionate Buddha” aided by other bodhisattvas would arrange passage to that blissful domain for all those who called upon them for help.
The Brahman tradition began to make a comeback in the years after the Mauryan dynasty fell in 183 B.C. The subsequent Sunga and Kanva dynasties brought a Sanskrit revival in Hindustan. Sanskrit, a literary version of the ancient Vedic language, became the sacred language of Hindu texts, while the Prakrits, vernacular languages associated with Buddhist and Jainist texts, became less widely used. The Gupta dynasty of north India (320-544 A.D.) did much to develop and spread the Hindu culture. Its religion was split into two main branches, Shivaism and Vaishnavism. The former comprised worship of Shiva, a phallic god also associated with death. The latter comprised worship of Vishnu, the Preserver, who has appeared in several human incarnations. Such innovations were made in response to the Buddhist challenge. These gods were like Hindu bodhisattvas. Buddha himself was regarded as an avatar of Vishnu. There was an emotional relationship between the god and his devotees. Sankara, a Hindu philosopher of the 9th century, argued that personal identities were an illusion and so special relationships between persons and gods were unnecessary. Each person was instead identified directly with ultimate reality. Ramanuja, in the 11th century, accused Sankara of being a crypto-Buddhist. In his view, one could still have a devotional relationship with the gods.
The Tamil-speaking part of southern India may have led the way toward this more emotional type of religion. During the 7th century, there was a resurgence of devotional Hinduism in the southern kingdoms of Pandya and Pallava, where Buddhism and Jainism had once been strong. Rock carvings and temples at Mamallapuram and Kanchipuram are among the treasures of Hindu architecture. Sankara, the great theologian, was a native of Kerala in the southwest. Buddhism became extinct in India as a result of devastation inflicted upon its monasteries by foreign invaders beginning with the White Huns in the 6th century. The Pala kingdom in Bengal, which Moslem armies conquered in 1202, was its last stronghold. The Bengalis preferred Tantric Buddhism which emphasized magical rites and worship of divine beings. They passed along this form of religion to the Tibetan people. The Palas dominated northern India during the opening decades of the 9th century but then lost out to the Pratihara dynasty of Rajasthan and central India, who were worshipers of Shiva and Vishnu. Jainism, also patronized by this regime, survived the purging of Buddhism; there are today about two million Jainists in India. However, the revived Brahman religion, Hinduism, gained a firm hold on the vast majority of India’s population.
After Muhammad Ghori defeated an alliance of Rajput kings in 1192, the religion of Islam was added to the Indian religious mix. Because they possessed a highly developed religion of their own, these Moslems could not be absorbed into the Indian culture; but neither was the Hindu population willing to convert to Islam. Consequently, India presented the paradoxical case of a state whose rulers professed one religion and whose people observed another. Out of respect for a superior civilization as well as political expediency, Moslem rulers of India felt obliged to designate their polytheistic Hindu subjects as “peoples of the Book”. Emperor Akbar formed an alliance with the Hindu Rajput kings to keep the power of his Turkish commanders in check. He abolished the special taxes on Hindus and gave them permission to build Hindu temples. Moslem clerics regarded this as apostasy. A Mogul successor, Aurangzeb, undid these concessions and, in the process, provoked a furious counterattack by the Hindu Marathas. Religious teachers or poets such as Nanak and Kabir synthesized elements of both religions. Their doctrines appealed to lower-caste Hindus, as did Islam. Higher-caste Hindus were recruited into the armed forces and civil service of the Persian-style Moslem governments, following a practice of Islamic administrations everywhere.
The Spread of Indian Religion to Lands outside India
The Kushan empire, which united Bactria and northwest India in the 1st and 2nd centuries A.D., was the epicenter of emerging Mahayana Buddhism. It included lands in western Afghanistan and Uzbekistan abutting the western part of China. Mahayana Buddhism was adaptable to local creeds and traditions. There was a ripe combination of circumstances for this religion to penetrate Chinese culture starting in the 2nd century A.D. Trade routes from western China to the Middle East and Europe ran through the Tarim basin and Soghd, southeast of the Aral sea, which were located just north of the Kushan empire. Buddhism may have seeped into China from that region in the form of neo-Sanskrit documents and works of visual art in the Gandharan Greek style. The Chinese and Indian modes of thinking were quite different. Chinese thought was expressed concretely and in a monosyllabic language. Indian thought was more abstract. Of the Chinese philosophies, Taoism came closest to the Buddhist mentality, so the early Buddhist writings frequently used Taoist concepts and terminology. Numerous scholars were at work translating Buddhist scriptures into Chinese.
When the eastern Han dynasty fell in the 3rd century A.D., there was a spiritual vacuum in China which Mahayana Buddhism filled. The Confucian ideology was discredited by its close association with the former corrupt imperial administration. The Taoists were discredited by their passivity in the face of public need. Zealous Buddhist missionaries were met by Chinese willing to listen to new ideas. Between 399 and 414 A.D., a Chinese pilgrim named Fa-hsien traveled to India to study Buddhism at its source. An Indian scholar named Kumarajiva, taken captive in 382 by a Chinese raiding party, spent his remaining life in China translating Buddhist classics. The Chinese Buddhists created their own sects. One was the “pure land” school which offered escape to a western paradise through faith in the bodhisattva Amitabha. Another was the Ch’an (Zen) school which stressed contemplation and personal discipline. The Buddhist monasteries acquired wealth. Emperors of the Sui and T’ang dynasties were personally attracted to Buddhism though they tolerated other religious philosophies. However, in a time of troubles the Confucians and Taoists conspired to curtail Buddhist activities. Between 842 and 845 A.D., the Chinese imperial government cracked down on Buddhist institutions. Monks and nuns were defrocked in large numbers. Property was seized from the monasteries.
Buddhism became the dominant religion in lands outside India which were influenced primarily by Indian or Chinese culture. The civilization of India began to spread towards southeast Asia and Indonesia during the 1st century A.D. That trend accelerated in the 3rd century as the Gupta society radiated cultural influence. Tibet came into India’s cultural orbit when a Tibetan king who invaded northern India after the death of emperor Harsha in 647 developed a script in the Indian style for the Tibetan language. That script was used to translate Mahayana Buddhist scriptures from Sanskrit. Tibetan or Tantric Buddhism later became the religion of nomadic peoples living in Manchuria and Mongolia. It tamed the warlike spirit of those peoples, eliminating them as a threat to civilized societies. Buddhism first came to Ceylon in the 3rd century B.C. Missionaries from the Pala kingdom brought the Mahayana religion to Java in the 8th century A.D. In 1190, monks who had visited Ceylon introduced Hinayana Buddhism to Burma and Cambodia. Vietnam’s adoption of Mahayana Buddhism, in contrast with other southeast Asian nations, reflects Chinese influence.
China also exerted cultural influence upon the neighboring lands of Korea and Japan. Emperor Han Wu-ti established a colonial outpost in Korea during the 2nd century B.C. Although the Koreans later expelled the Chinese from that outpost, Chinese culture remained. In the 5th and 6th centuries A.D., a large number of Koreans migrated to Japan, bringing with them the Korean version of Chinese Mahayana Buddhism. The Buddhist religion was introduced to Japanese society in the 7th century A.D. Block printing had been invented in T’ang China to mass-produce Buddhist and Confucian texts. Some of this literature made its way to Japan where scholars adapted the Chinese characters to spoken Japanese. The resulting script is based on associations between Chinese visual characters and syllabic sounds in the Japanese speech of that day. Japanese Buddhists developed simplified versions of Chinese religious teachings to appeal to a wider audience. Zen Buddhism, taken from the Ch’an school, was introduced to the samurai court at Kamakura in 1191. Its strict mental and physical discipline was attractive to soldiers. The Pure Land sects of Honen and Shinran were mass cults which promised entrance to a heavenly paradise to persons who repeated the name of the bodhisattva Amida. The Nichiren sect taught salvation by chanting praise of the Lotus Sutra.
The rival Buddhist sects established kingdoms of their own. They fought one another employing techniques of the martial arts. Buddhist monks trained squads of Ninja warriors to infiltrate enemy headquarters and kidnap or assassinate individuals. Ieyasu, last of the three great shoguns of the 16th century, once hired these warriors to kidnap the children of a rival warlord so that he would have a bargaining chip to offer in exchange for his own captive children. However, he and his successors promoted the neo-Confucian philosophy because they believed its ethical doctrines would strengthen their regime. Portuguese missionaries brought Christianity to Japan in the 16th century. Nobunaga, first of the three shoguns, tolerated Christianity because it offset Buddhist power. His successor, Hideyoshi, was of another mind. He mistrusted the western missionaries, believing that religious conversions might precede a political takeover as had happened in the Philippines. Persecution of Christians began under Hideyoshi in 1597. When a rebellion broke out in the Catholic community of Shimabara in 1638, the government suppressed both Christianity and foreign trade. Buddhism was not suppressed. Indeed, all Japanese were required to register as lay associates of a Buddhist temple to prove they were not Christians.
Hinayana Buddhism spread from Burma into the neighboring countries of Thailand, Laos, and Cambodia during the 13th century, ousting the Hindu and Mahayana Buddhist religions. The Thai people had come from western China, but they converted to the Burmese type of religion. A dynasty of god-kings influenced by Indian civilization had ruled the Khmer empire in Cambodia for over five hundred years. The Vietnamese carried their Chinese-style Mahayana Buddhism with them as they conquered the Champa kingdom to the south. The Chams then became Moslem. The Srivijaya empire on Sumatra, founded in the 7th century, and the Sailendra empire, founded in Java in the following century, were both Mahayana Buddhist; however, a Shaivist Hindu regime, the Sanjayas, arose in east Java in the late 8th century to replace the Sailendra kings. The Empire of Majapahit was founded in Java in 1293 in the aftermath of the Mongols’ naval defeat. This far-flung empire was founded by a Mahayana Buddhist prince, but Hindu and animistic religious influences were also strong. In the 15th century, Islamic religion from India poured over Malaya and the Indonesian archipelago to form the last religious layer. Rulers of port cities and coastal principalities found it advantageous to adopt the same religion as the Moslem merchants on whose trade their livelihoods depended.
An alluring religious possibility in the 13th century was that the world’s largest political empire might convert as a bloc to whatever religion managed to win over its Mongol rulers. Although the Mongols were originally shamanists, Kublai Khan’s mother was a Nestorian Christian. The great Khan asked Marco Polo’s father and uncle to invite the Pope to send a delegation of learned Christians to his court to persuade him of the merits of their religion. Nothing came of that invitation. Kublai himself preferred Buddhism, especially Tibetan Lamaism. The Mongols converted to “Yellow Church” Buddhism, associated with the Dalai Lama, in the late 16th century, although several of the Mongol successor states in the west had earlier converted to Islam. However, a former Buddhist monk, Chu Yüan-chang, led a rebellion against the Mongol dynasty in southeast China and, in 1368, proclaimed himself emperor of the Ming dynasty. Nestorian Christianity was expelled from China. Neo-Confucianism became again the state religion. In the mid 19th century, a religious visionary named Hung Hsiu-ch’üan, who believed that he was the younger brother of Jesus Christ, aroused a horde of peasants and unemployed workers to rebel against the Manchu government and non-Christian religions. These soldiers of the “Taiping Rebellion” controlled the Yangtze Valley for more than a decade but, with western help, were suppressed.
A SHORT HISTORY OF
CIVILIZATION III
Special Circumstances: Its Origin in Europe
The next epoch of world history is focused particularly on western Europe. Its civilization began there before spreading to the rest of the world. The European ascendancy at the beginning of this epoch created an imbalance which produced a backlash at the end; for it was intolerable that one people’s history should be the history of all mankind. In terms of our theory, a critical event was the introduction of a new cultural technology in western Europe, which was Gutenberg’s invention of printing with movable type. Although the technology of printing had previously been employed in China and Korea, it did not take hold in the East Asian societies to the extent that it did in the West because of differences in script. Printing did not catch on in Islamic society because of religious and cultural restrictions. That left western Europe, a relative latecomer in acquiring the technology, to make full use of it. The particular culture which appeared then in northern Italy was the embryo of a future civilization in Europe and the rest of the world. Its direction was secular and commercial.
Thawing Religious Belief
In the late Middle Ages, European people were caught in the grip of a religious ideology that permeated community life. A thick dogma had settled down upon the society. Philosophical conceptions once incandescent in Greek or Jewish minds had long since cooled in frozen shapes. The eternal truths of Christianity, unchallengeable by reason, were embalmed by faith. Beneath the surface, life yet stirred. It was, in fact, a vigorous, healthy stirring of the human spirit. Despite its spiritual veneer, medieval life was bustling with a lusty materialism. Religious offices, indulgences, and relics of saints were offered for sale. Holy names, places, and things were regularly blasphemed. The church became a “trysting place” for young lovers. Obscene pictures were peddled there on festival days. Johan Huizinga has written that with “an enormous unfolding of religion in daily life ... all that is meant to stimulate spiritual consciousness is reduced to appalling commonplace profanity ... Holy things ... become too common to be deeply felt.”
Then, like a glacier, the religious culture of the late Middle Ages, or what would be the springtime of CivIII, began to melt. The solid chunks of dogma, exposed to reason, developed cracks here and there. As some of these cracks became wider, tiny streams of fresh thinking slipped through. In time, the streams became torrents, which burst the dam of faith. One can associate this process with a few persons of strong intellect and determination. Such a person was Peter Abélard (1079-1142), a teacher of theology at the University of Paris. Another was Francesco Petrarca or Petrarch (1304-1374), an Italian scholar and poet. Others include: Emperor Frederick II (1194-1250), a freethinking monarch who defied the Pope; Roger Bacon (1214-1294), an English monk who developed the theory of experimental science; and Dante Alighieri (1265-1321) who wrote the Divine Comedy. The Roman church meanwhile was losing its moral credibility. The “Great Schism”, which produced rival Popes in Rome and Avignon, struck at papal legitimacy. The public was becoming disgusted with corrupt clergy and the need to raise increasing sums of money. There was an air of violence and coercion about the church, so unworthy of its founder.
The Seeing Revolution
The idea which we have of the Renaissance is that, being a time of cultural rebirth, it brought forth new discoveries of the world. Like a new-born child, humanity again learned to see. The civilization of western Europe was emerging from its cocoon of medieval piety to embrace the humanist principle that “man is the measure of all things.” The Renaissance culture rejected philosophical speculations in favor of a new worldliness which sought knowledge based on natural observation. “Mental things which have not gone in through the senses are vain and bring forth no truth except detrimental,” Leonardo da Vinci declared. The previous epoch, begun with a philosophical revolution, had valued “things unseen”. Its religious culture considered ideas to be a source of goodness and truth while the body was a source of sinful weakness. Believers were asked to have faith in God’s promise and quell doubts born of worldly experience. In contrast, the third civilization began with a revolution in seeing. Its redirected attention to things of the world represented a complete reversal of the previous culture.
The first fruit of this seeing revolution was a host of beautiful objects created by artists whose eyes were open to new possibilities of color, shape, and composition. The human body as painted by artists became an object of pleasure and grace. The mind deferred to what could be observed in the physical world. Where Byzantine art had produced monochromatic and rather ethereal representations of human figures, the north Italian art pioneered by Giotto fleshed out its subjects. The names of Michelangelo, Leonardo, Raphael, Titian, Botticelli, Brunelleschi, Masaccio, and others suggest the creation of beautiful colors and shapes in a balanced composition. North of the Alps, another group of talented artists including Jan van Eyck, Pieter Brueghel, and Albrecht Dürer were creating works rich in realistic detail. Realism was the theme of Renaissance art. Leonardo studied human anatomy to create more realistic visual representations of his subjects. Alberti, Brunelleschi, and others developed the technique of perspective to suggest how objects at various distances might appear in real life.
In literature, the veil was lifted from works of classical authors known by reputation or in translations from Arabic. Petrarch taught himself Latin and Greek to be able to read ancient manuscripts in the original. He came to feel that he knew their authors personally. His famous love poems dedicated to Laura abandoned medieval courtly traditions in favor of presenting the image of a real woman. Petrarch’s example inspired a revival of interest in classical Graeco-Roman texts. Where medieval scholars had been careless in copying or compiling texts, Petrarch respected the integrity of original compositions. He was concerned with the authenticity of texts and with discovering an author’s real intentions. Dante wrote the Divine Comedy in his native Tuscan language rather than in Latin, which made poetry accessible to many more people. Vernacular translations of the Bible were a revolutionary event. They challenged the authority of the Roman church by allowing the masses of people to read God’s word for themselves and seek salvation in truths of the Bible rather than in church-controlled rituals.
Medieval society had sought truth about nature in the scientific writings of Aristotle. Roger Bacon challenged this attitude in promoting the alternative approach of experimental science. “Cease to be ruled by dogmas and authorities; look at the world,” he said. Following Bacon’s prescription, empirical science finds truth in observed patterns in the natural world and in theories that can be experimentally tested. Alchemists and astrologers had long been gathering facts about nature. This empirical orientation, combined with mathematics and a willingness to give up beliefs contradicted by natural observation, led directly to modern science. The Polish astronomer Nicholas Copernicus conceived the modern scheme of the solar system, which contradicted the earth-centered scheme of Ptolemy. Galileo conducted an experiment to test Aristotle’s teaching that objects fall to earth at different rates depending upon their weights. He found, to the contrary, that balls of different weights dropped from the top of the Leaning Tower of Pisa hit the ground at the same time.
Leaving behind a religious culture laden with symbolic or invisible meanings, Europeans in the Renaissance period discovered the physical world. An Italian navigator, Christopher Columbus, persuaded the Queen of Spain to finance a voyage across the Atlantic to reach what he supposed would be east Asia. But, instead of the Indies, he reached the shores of a strange new land. There, in the western hemisphere, Columbus and his European companions found a different race of people, new types of food, unknown diseases, tobacco, furs, timber, inland waterways, deserts, silver and gold in abundant supply. Columbus had heard that Japan had inexhaustible supplies of gold. A Florentine map, badly underestimating the size of the earth, placed Japan just west of Europe. Magellan’s voyage around the earth revealed the existence of a new continent which could be rounded at its southernmost point. The Spanish and Portuguese voyages of global exploration were another manifestation of the spirit of worldly discovery feeding the new civilization.
A commercial spirit drove this age of discovery. There was a strong market in Europe for oriental nutmeg and spices, which could preserve meat and add exotic flavoring, and for the Chinese silk used in princely garments. When the Ottoman Turks captured Constantinople in 1453, a hostile empire blocked the overland trade routes to China that were previously available to European merchants. Portuguese navigators found an alternative route by sea around the southern tip of Africa and soon captured Moslem trading posts along the Indian Ocean. Columbus was a native of Genoa. Genoese trade routes ran between Kaffa in the Crimea and ports in the Adriatic and western Mediterranean seas. The Venetians, who dominated trade with the Ottoman empire, also sent ships into the north Atlantic. A third trading bloc maintained a network of ports in the Baltic and North seas. Northern Italy and Flanders, centers of textiles manufacturing, were the most active commercial regions. The great wealth generated by trade, banking, and manufacturing created a demand for various kinds of luxury goods. Money and luxurious objects were no longer something to be despised or feared as a temptation to the soul. The appeal of wealth was visual and immediate, not other-worldly.
Luther’s Protest
The growing concern with wealth and beautiful objects affected the Roman church, which was headquartered in central Italy. The church needed large amounts of money to support its worldly projects. Unable to raise enough funds from management of its lands and other properties, it resorted to such fundraising methods as selling church offices and papal indulgences. The Renaissance popes had frequent need of moneylenders; Innocent IV called them “the peculiar sons of the Roman church.” The Papal fortunes became intertwined with those of the Borgia and de’ Medici families. Pope Callistus III openly practiced nepotism. Pope Alexander VI, his nephew, was the father of Cesare Borgia, Machiavelli’s model of a ruthless prince. In the 1490s, the worldliness of the church found a determined opponent in a Dominican monk, Girolamo Savonarola, who urged his fellow Florentines to forsake artistic images, abandon luxuries, and return to simple Christian living. Alexander VI ordered him to desist in his anti-Roman preachings. Savonarola refused. For this he was excommunicated and burned at the stake.
Julius II launched a massive project to rebuild St. Peter’s church. This project, which required a century and a half to complete, engaged artists of such talents as Bramante, Michelangelo, Raphael, and Bernini. Such an undertaking required new fundraising efforts. In 1509, the Pope instituted a special Jubilee indulgence. A Dominican preacher named Johann Tetzel arrived in Saxony in October 1517 to promote a new dispensation of indulgences granted by Leo X. This was the event that inspired Martin Luther to post the “95 theses” on the church door at Wittenberg. His manifesto was widely distributed. While Luther was branded a heretic, the pressure to reform the Christian church was now too strong for him to be treated as a politically isolated monk such as Savonarola had been. In 1520, Pope Leo X issued a bull condemning Luther’s views and threatening him with excommunication within sixty days if he did not submit to Roman authority. Luther and his friends burned a copy of it in a bonfire. The Saxon elector Frederick III gave Luther sanctuary in his castle at Wartburg.
A new Christian faith, supported by a powerful group of north European monarchs, took shape along the lines of Luther’s religious arguments. Among its tenets were disbelief in the supernatural powers of the Mass and other church sacraments, in the ability of sinners to win salvation by good works, and in the presumed power of Roman priests to mediate between God and humanity. “Justification by faith” was, in Luther’s view, the sole means of salvation; however, some persons were predestined to believe and be saved while others were not. While the Roman church claimed authority on the basis of ecclesiastical succession, Protestants maintained that “scripture alone” was the basis of religious authority and truth. The ability of each believer to read and interpret passages in the Bible kindled a spirit of individualism. Individual believers could go straight to the source of Christian teaching and see what Jesus had actually said. Thomas Hobbes observed that “after the Bible was translated into English, every man, nay, every boy and wench that could read English, thought they spoke with God Almighty.” An age of vigorous public discussion followed as pamphlets were printed to defend the Protestant or Catholic cause. Still, individuals were not free to choose their own religion. The temporal rulers of Europe had the power to decide what faith their subjects would adopt.
As strong-minded individuals preached their own versions of Christian truth, western Christianity became hopelessly split into denominations. By 1650 A.D., there were at least 180 different Protestant sects. Luther’s own followers, concentrated in Germany and Scandinavia, were a conservative group compared with the Calvinists, Zwinglians, Anabaptists, Mennonites, and Quakers. Zwingli differed with Luther over interpreting the Mass. The Anabaptists repudiated infant baptism. The most important Protestant figure after Luther himself was John Calvin, theocratic leader of the city of Geneva. His teachings are presented in a treatise, The Institutes of the Christian Religion. Theologically, Calvin believed in the moral depravity of the human race, which would lead to eternal damnation but for Jesus’ saving grace. The Calvinist doctrine of predestination held that from the beginning of time God had ordained that a person’s soul after death would be either saved or damned. Regardless of one’s efforts, nothing could be done to change that determination.
Forces working within the Roman church might have prevented the Protestant rupture had they acted more promptly to restore its commitment to poverty and Christian service. Bishop Caraffa, later Pope Paul IV, led a group of Italian prelates seeking to reform church practices even before Luther’s defiant act. The Council of Trent, held intermittently between 1545 and 1563, reviewed church sacraments and beliefs in light of Protestant criticisms. The so-called “Counter-Reformation” was a conservative movement to revitalize and strengthen the church. Its most important figure was St. Ignatius Loyola (1491-1556), a Spaniard who founded the Jesuit Order in 1540. The Jesuits emphasized spiritual discipline and education. Self-styled soldiers of the church, Jesuit missionaries were prominently involved in evangelizing the native peoples of the Americas. They also undertook missions to India and the Far East. Thanks largely to the Jesuits, Spanish and Portuguese colonization always retained a hard religious edge. English and Dutch colonists were more willing to limit themselves to purely commercial objectives. It is ironic that the spirit of commercialism was stronger in Protestant than Catholic countries considering that Luther’s complaint had concerned excessively commercial practices within the church.
Scholars have noted a correlation between Calvinism and commercial progress. Perhaps the best-known explanation is that presented in Max Weber’s treatise, The Protestant Ethic and the Spirit of Capitalism. Weber argued that persons such as the New England Puritans, who were raised in a strongly Protestant culture, were more likely than others to regard money-making as a worthwhile pursuit. Theirs was an ethical imperative to achieve wealth. Weber speculated that, although Calvinist dogma held that good works were ineffectual in achieving salvation, the Calvinists craved reassurance that they were among God’s elect. While strength of faith or spiritual conviction would be for any Protestant a sufficient means of salvation, Calvinists believed that “faith had to be proved by its objective results.” One increased the sense of conviction in one’s own salvation by experiencing at each moment a life of “systematic self-control”. Therefore, the righteous persons of this religious persuasion were ascetics actively engaged in the world - Christian businessmen like John D. Rockefeller - driven to make money not for the sake of enjoying physical comforts but deriving satisfaction from the money-making itself.
Commercial Rivalry between the North Atlantic Nations
The European voyages of discovery beginning in the 15th century introduced an era of exploration and colonization in distant lands on earth. Each nation developed territorial interests. Portugal and Spain made the first claims to new territory by virtue of their navigational feats. In 1493, Pope Alexander VI divided the entire world outside Europe between the two Iberian nations on condition that they convert the native peoples in their respective possessions to Christianity. A Papal bull (later adjusted by treaty) gave Portugal lands east of a longitudinal line running through present-day Brazil, and Spain lands west of that line. In the west Pacific region, a similar line assigned the Molucca (Spice) islands to Portugal and the Philippines to Spain. That gave the two Iberian nations a big head start in the race to colonize and commercially exploit the non-European world. The focus of political rivalry shifted from the Mediterranean area to the North Atlantic and to the larger-sized nations located along its shores.
Portugal made the first significant discovery when, in 1488, ships commanded by Bartholomeu Dias rounded the Cape of Good Hope in southern Africa. In 1498, a Portuguese flotilla under Vasco da Gama’s command sailed around the Cape and traveled to the west coast of India. The Portuguese found that Arab merchants controlling trade along the Indian ocean were uninterested in the type of merchandise that they carried. So the Portuguese, equipped with muskets, returned several years later and took the Arabs’ trade outposts by force. They seized Goa in 1510, Malacca in 1511, and Hormuz in 1515. Portuguese merchants controlled the trade in oriental spices for the remainder of the century. Along with the traders went Jesuit missionaries intending to convert the Asian peoples to Christianity. St. Francis Xavier founded missions in west India, the Molucca islands, and Japan between 1541 and 1552. Matteo Ricci traveled to China in 1582, where for the next three decades he translated Christian scriptures into Chinese, wrote cultural treatises, and became a mathematician and astronomer at the Ming court in Peking. The Iberian Christians were later expelled from that region when church officials refused to compromise on theology to accommodate local traditions.
In 1492, Christopher Columbus and his Spanish companions landed on the island of San Salvador in the West Indies (mistakenly believed to be a part of Asia) in search of gold. What little gold they found there gave rise to further explorations and a system of slave labor to work the mines of Hispaniola. In 1519, Spanish adventurers led by Hernando Cortés came in contact with a populous nation in southern Mexico. Greatly outnumbered, they conquered the Aztec empire in less than two years thanks to their boldness, superior equipment, help from peoples hostile to the Aztecs, and the lucky fact that Cortés’ arrival on a certain day and year convinced the Aztec emperor, Moctezuma, that he was the reincarnation of the god Quetzalcoatl. Another Spaniard, Francisco Pizarro, conquered the Inca empire in South America between 1532 and 1535. Both empires were rich in silver and gold. The Spanish set about to exploit that resource systematically.
Tons of precious metals from American mines were shipped to Spain each year. Instead of becoming rich from this cargo, however, the Spanish monarch found that he was becoming steadily poorer. Mining and shipping operations, military protection for the galleons, and loss of revenues to private mine owners were costing the crown more than the gold and silver were worth, especially considering that the large increase in the supply of these metals brought a reduction in price. Spanish silver was causing severe monetary inflation not only in Europe but also in the Ottoman empire. In response to growing debts, the Spanish parliament imposed a ban on shipments of precious metals from the country. The king then suspended debt payments, issuing bonds to his creditors. As state finances continued to deteriorate, Philip II and his advisors blamed the situation on foreign merchants, usurers, and speculators. They required that the American colonies buy only from Spain. They imposed duties on goods imported from the Americas, escort fees for the warships, and other new taxes. Meanwhile, attendance at once-brisk trade fairs languished. So much land was taken out of wheat cultivation that Spain could not feed its own people.
To protect its interests, the Spanish government tried to exclude English merchants from trade with the New World. Matters came to a head in 1567 when the Governor of Vera Cruz seized English ships filled with African slaves and arrested the crew. Two ships escaped, commanded by John Hawkins, the squadron leader, and a young captain named Francis Drake. Drake and a crew of English pirates later went on a rampage against Spanish towns along the Atlantic and Pacific coasts. Piracy had long been a way of life in the Atlantic. Merchant seamen were given a royal charter to engage in armed piracy against Spanish vessels in the series of wars between Spain and France beginning in 1521. While French pirates plundered at will, their English counterparts held back because Queen Mary, the daughter of Henry VIII, was married to Philip II of Spain. After Mary’s death in 1558, that constraint was removed. For a time, the new queen, Elizabeth I, steered a delicate course between supporting English seamen and keeping the peace with Spain. Her decision to confer knighthood on Francis Drake aboard his pirate ship in 1580 signaled an end to that policy. Philip II responded in 1588 by dispatching the Armada, a fleet of 130 ships, to conquer England. The smaller but more agile English fleet defeated it.
After ousting the Arabs, Portugal controlled the lucrative trade in oriental spices. The Dutch began to challenge their position at the end of the 16th century. Philip II, who after 1580 was king of Portugal as well as Spain, attempted to punish his rebellious subjects in the Netherlands by ordering Dutch ships seized if found in Spanish or Portuguese waters. Jan Huyghen van Linschoten, a Dutch merchant who had spent five years in Goa, published a book in 1595 suggesting how his fellow countrymen might break the Portuguese monopoly on trade with the Indies. His idea was that the Dutch ought not to challenge the Portuguese military outposts in India but, instead, to set up trading posts in the relatively undefended areas of Indonesia and Malaya where the spice trade originated. In the same year, Cornelius Houtman embarked on a voyage with four ships to Bantam and the Moluccas which succeeded in breaking the Spanish-Portuguese naval blockade. That triggered a series of Dutch expeditions to the South Seas.
In 1602, the Estates General of the Netherlands gave the United East India Company the right to enter into treaties with Indian princes, raise troops, build fortifications, and appoint governors and judges. Its directors promptly sent fourteen sailing ships to Asia. A stone trading post established at Bantam (Java) became a base for further expeditions to neighboring areas. Other Dutch fleets, which were equipped with powerful guns, attacked Portuguese forts, negotiated friendship treaties with Indian rulers, and blockaded Goa. The Dutch defeated Spanish and Portuguese fleets in several naval engagements. The hard-pressed Spanish monarch signed a 12-year armistice with the Dutch in 1609, allowing free trade in Asia. The Dutch used this opportunity to build trading posts and forts throughout Indonesia. They developed a flourishing trade in spices based not on force but on favorable purchases and sales in the open market. The armistice with Spain was not renewed in 1621. Portugal and Spain again barred Dutch merchants from their harbors. The Dutch then imposed a tight naval blockade of Portuguese trading ports in Africa and southern Asia. When peace was established in 1645, Portuguese trade was ruined. The Dutch ruled the seas.
England’s interest in the New World began with John Cabot’s voyage to North America in 1497 in search of a northwest passage to the Pacific. Failing in that purpose, Cabot discovered what became the world’s richest cod fishery off the coast of Newfoundland. In the 16th century, fishing for cod turned out to be more profitable than mining silver. Sebastian Cabot, John’s son, made another attempt on England’s behalf to reach the Far East by a sea route. In 1553, he set sail on a voyage around the northern part of Norway with a fleet of three ships. Two ships perished in the Arctic Sea but the third reached the current site of Archangel. From there the ship’s captain traveled overland to Moscow where Czar Ivan the Terrible received him cordially. The Russians realized that they had found an alternative source of European goods. The English found a source of furs and a potential market for their woolen products. The Muscovy Company, first of England’s great colonial shareholders’ companies, was formed in 1555 to take advantage of that opportunity.
Several other companies were established to represent English commercial interests in diverse parts of the world. The Levant Company obtained a royal charter to trade with the Ottoman empire in exchange for an annual duty of at least 500 pounds. England set up consulates in Syria and Egypt. Despite official obstacles, its merchants did a brisk and profitable trade with countries bordering the Mediterranean sea. However, the defeat of the Invincible Armada and England’s subsequent retaliation against Spanish ports brought an end to lawful trade with the Iberian countries and their colonies in Africa and the Americas. No English ship dared sail past the Straits of Gibraltar into the Mediterranean sea. Therefore, merchants of the Levant Company sought other channels for trade in oriental goods. John Newbury persuaded its management to send him and five companions on a mission around southern Africa. Though unsuccessful, this expedition gave the English a close look at the riches of India. They learned intimate details of conditions in China, as well as boat construction, by capturing a Portuguese vessel off the Azores in 1592. Queen Elizabeth sent a fleet of ships to explore trade with China which, unfortunately, sank during a storm.
The English persisted in pursuing trade opportunities. The East India Company, an association of English merchants, received a charter from the queen on December 31, 1600, granting them a monopoly on trade in the Eastern Hemisphere. At first, this venture was hindered by falling prices for pepper and spice, by opposition from its European rivals, and by the fact that south Asian peoples had little use for English woolen goods. After the Dutch made it clear that they would not tolerate an English presence in Indonesia, the East India Company decided to concentrate on trade with India where it was hoped they might acquire goods to be traded later for Javan spices. An Englishman named Midnall had made friendly contact with the Mogul emperor, Akbar the Great. Officials of the East India Company followed up on that visit by sending another fleet to India. The Portuguese, entrenched at Goa, blocked English efforts to establish trading posts in western India. However, the English commander, Hawkins, traveled to Agra to plead his case with the Indian emperor. Emperor Jahangir, Akbar’s successor, granted the English all the commercial privileges that they requested. This was the beginning of a long and fruitful association between the British East India Company and the Mogul dynasty.
Although the papal bull of 1493 had granted Spain all American lands except for Brazil, the Spanish crown was too busy with administering its Mexican and South American colonies to pay much attention to North America. Other European nations, starting with France, explored the coastal and interior waterways of this continent. English kings in the 17th century granted royal charters which allowed individuals to settle and govern tracts of land in North America so long as their rule did not conflict with English law. One such colony was established at Jamestown in 1607, and another in Massachusetts in 1620. Groups of religious dissenters who felt oppressed by the Church of England flocked to the New World. The first major influx brought more than 20,000 Englishmen, mostly Puritans, to colonies in New England before the English Civil War erupted in 1642. Other religious minorities, including Quakers and Roman Catholics, were given opportunities for exile to American colonies. During the reign of Charles II, the English government began to tighten its regulation of those colonies. The Navigation Acts, enacted between 1660 and 1696, required that American commerce be transported in ships built and operated by English nationals. The Crown also altered or revoked many colonial charters, placing its North American subjects under the control of appointed governors.
The Spanish possessions in America were on a still tighter leash. From the beginning, the King of Spain ruled his American empire by a thick, detailed code of laws. Persons wishing to emigrate to the Americas had to apply for a permit. A Supreme Council of the Indies, headed by the king, made the major administrative decisions in Spain, while viceroys, judges, and other bureaucrats carried out its mandates on site. The Pope gave the Spanish monarch authority over religious as well as political matters in return for a commitment to maintain the church. Special attention was paid to the treatment of native peoples. Generally, the church acted to protect them from severe exploitation and to gain conversions by persuasion rather than by force. On the other hand, the native population declined precipitously during the first century of Spanish rule. In Portuguese Brazil, the colonial government was more loosely organized. The real power lay in the hands of the large plantation owners. The European and native American peoples mixed more freely in Latin America than in the English colonies to the north, producing a hybrid race. By 1800, the population of Spanish America had reached 18 million. Its cities rivaled those of Europe both in size of populations and in culture and wealth.
French settlements in the Americas were concentrated along the St. Lawrence river in Quebec. French fur traders in the Great Lakes region obtained beaver pelts from Indian trappers in exchange for rifles, knives, and steel implements. These pelts were used for the high-priced beaver hats worn by European nobility. Louis XIV of France, who set the pace of fashion in Europe, had his eye on European conquests rather than overseas colonies. He built ornate palaces such as the royal residence at Versailles. The king’s finance minister, Jean Baptiste Colbert, had ambitious ideas about making France an economic power. His memorandum written in 1664 recommended the creation of a French East and West Indies Company. Colbert was the architect of a planned economy aiming to make France economically self-sufficient. French textiles and other manufactured goods were prized objects of trade. Colbert hoped to obtain American silver from Spain in payment for its trade deficit with France; that silver could then be used to purchase spices in Asia. However, the Spaniards hoarded their silver. Colbert’s mercantilist strategy of restricting imports and pushing exports became self-defeating when other nations followed suit. Ultimately, his micromanagement ran the French economy into the ground. France was bankrupt when Louis XIV died in 1715.
Colonial Trade
Wars impose heavy costs upon national economies. To raise money for its protracted wars against France, the English government in 1694 gave a group of merchants the right to issue bank notes to the extent of their invested capital. This bank immediately loaned the government its entire capital of 1.2 million pounds sterling. It then issued paper money good for purchasing precious metals and foreign bills of exchange. That is how the Bank of England began. After Louis XIV’s death, a Scottish financier named John Law convinced the Duke of Orleans to allow him to establish a similar institution in France. The French economy was then facing a severe monetary squeeze. Law proposed to restore money and credit by issuing bank notes backed by his own capital. The Banque Générale Law et Cie was created for that purpose in May 1716. While acceptance of the bank notes was voluntary, Law’s scheme was successful. The French government took over the bank in 1718. Law, as controller general of finances, merged the royal bank with a stock company which he had formed to promote land sales in Louisiana. There was a frenzy of speculation which increased the price of the stock to unsustainable levels. When the price collapsed in December 1720, Law fled the country. The English “South Sea Bubble” burst about the same time.
While financial speculation had brought Law’s downfall, the idea behind his Mississippi company was basically sound. Law intended to attract European settlers to the French Louisiana territory where they might cultivate crops that could be marketed in Europe. The chief crops were coffee, sugar, and tobacco. Law would encourage the settlers to grow these crops on plantations and would extend credit to them to purchase necessities from Europe while the crops matured. He would also use credit for the settlers to purchase African slaves to work the plantations. Law consolidated all the French overseas trading companies into one, purchased the colony of Louisiana, secured the tobacco monopoly, and expanded the French slave trade. Although his Mississippi Company operated for only two years, those years marked a shift in the direction of European commerce. Previously, European merchants had focused upon the trade in spices from the East Indies. Law’s enterprise diverted attention to the West Indies. Europeans acquired a taste for those exotic goods which, Ernst Samhaber wrote, “introduced into the chilly West the whole seductive warmth of the tropics and the sweet ease of life in a sunny climate.”
France and England were now on a collision course. Once united in opposition to Spanish power, the two nations clashed when England joined the “Grand Alliance” against Louis XIV. In North America, the French had a long-standing alliance with the Huron tribe of Indians who supplied them with beaver pelts taken from interior waters. The Iroquois Indians attacked the Hurons, forcing the French to take sides. Dutch fur traders stationed in New Amsterdam, and later the British, sided with the Iroquois, who promised to divert the fur trade from the St. Lawrence river to the Hudson. Thus, the European powers were drawn into conflict for trade dominance. This struggle was resolved a century later with the English general Wolfe’s capture of Quebec in 1759, which effectively decided the “French and Indian War”. When England’s American colonies broke away from their mother country in the mid 1770s, the British and their Iroquois allies fought the colonial rebels then aided by the French. George Washington, a colonial officer on the British side in that earlier conflict, was commander in chief of the American colonials. After this war was concluded, Canada remained a British possession with a sizable French-speaking minority in the Quebec province.
Both England and France had trading companies in India. As the Mogul empire weakened during the 18th century, these companies formed strategic alliances with Indian princes and became militarily engaged. Robert Clive’s victories over the French and Dutch between 1748 and 1760 put England in a dominant position. The British East India Company took over the administration of provincial governments in northern India on behalf of the Mogul empire. Its representatives grew rich from exercise of their official duties. To combat corruption, the British Parliament assumed joint control of the Indian government in 1774, ruling through a series of governors general. Public administration was actually an unprofitable undertaking for the East India Company. Its money was made in the tea trade. A publicity campaign in the 1720s had persuaded the English public to drink tea rather than coffee. Whenever the British government needed money, it raised the tax on tea. This strategy backfired when in 1773 a band of American colonials disguised as Indians dumped a shipload of tea into Boston Harbor to protest the increased taxes. The British retaliated, bringing on the American war of independence.
During colonial times, North America participated in a highly profitable three-cornered trade. Ships from England carrying textiles, beads, and metal wares sailed first to the coast of west Africa, where they exchanged these goods for human slaves. The ships then sailed across the Atlantic ocean to the Caribbean islands, Brazil, or England’s North American colonies, which needed the African slaves for plantation laborers. In the Americas, merchants purchased such products as sugar, coffee, and tobacco from the plantations, as well as timber and fish. Those commodities went back to England to complete the cycle. The West Indian products, especially rum distilled from the sugar, had greater commercial appeal for Europeans during the late 18th century than the traditional eastern luxuries of spices and silk. There was also an assured demand for slaves in the New World. The key to this trade was finding items which the slave-hunting chieftains of west Africa would accept in exchange. Each chieftain preferred certain products.
Later, leaders of the Dahomey and Ashanti tribes from the interior of west Africa offered to furnish the white merchants an unlimited number of slaves in exchange for firearms which they could use to subdue their rivals. They organized regular manhunts, seizing greatly increased numbers of captives. Portuguese navigators had begun the slave trade in the 15th century, and the Dutch had expanded it. The English brought this trade to a peak. An estimated 900,000 slaves were shipped from Africa to America in the 16th century. That number rose to 1.7 million slaves in the 17th century, and to over 7 million in the 18th century, before dropping off in the 19th century. Slavery was abolished in the British West Indies in 1833; in the United States, in 1865, following the U.S. Civil War; and in Brazil, in 1888. Most black African slaves were brought to islands in the West Indies and to Brazil; fewer than a million went to the United States. Many others died during the Atlantic passage.
Liverpool was the center of the English slave trade as well as trade in manufactured products. Over 300,000 slaves were brought across the ocean on ships sailing from that port between 1783 and 1793. However, the cargo carried back to England had changed. Instead of coffee, bales of cotton were starting to appear. There was not enough land in the West Indies to grow this crop, so cotton began to be cultivated in the southern part of the United States. Unfortunately, the variety of cotton which grew well in that region was difficult to gin. Eli Whitney’s cotton gin, invented in 1793, solved this problem. Another problem was the lack of labor to weave cotton cloth. Again, mechanical inventions came to the rescue, including Hargreaves’ spinning jenny and Crompton’s “mule” machine, the latter eventually driven by steam power. Cheap textiles made of cotton replaced woolen goods as England’s leading export product. In India, once an exporter of cotton textiles, machine-produced goods imported from England underbid the local product and threw millions of weavers out of work. Much of England’s wool business moved to Australia. Twenty-nine merino sheep shipped there in 1787 multiplied into today’s herds.
Trade Competition in the Industrial Age
The Industrial Revolution resulted from technologies that came from the natural sciences. According to Arnold Toynbee, the religious wars which raged in Europe during the 17th century were so hateful and intense that men of intelligence turned their attention instead to the natural world. While theological questions were clearly divisive, men could be friends as fellow students of nature. “Before the close of the (17th) century,” wrote Toynbee, “Religion had been replaced by Technology ... as the paramount interest and pursuit of the leading spirits in the Western society.” The Royal Society of London, proposed first by Sir Francis Bacon in The New Atlantis, was established in 1660 by a group of men who, tired of religious controversies that had led to the English Civil War, wished instead to discuss the physical world. Earlier in the century, Galileo had built a telescope which allowed men to study celestial objects. A Dutch lens grinder, Antonie van Leeuwenhoek, used the microscope to observe cell tissues, bacteria, and other minute objects. Sir Isaac Newton, a president of the Royal Society, worked out mathematical equations describing fundamental relationships that underlay gravitation, optics, and physical motion.
It took about a century for this interest in science to be translated into technological improvements affecting daily life. A system of crop rotation introduced from Holland in the late 17th century helped to increase wheat yields in England and to supply fodder that would keep livestock alive during the winter. Starting in the 1760s, inland canals began to be used to transport coal from mines to industrial centers such as Manchester. James Brindley of Staffordshire designed and built almost four hundred miles of canals. A new generation of ironmasters built iron bridges across rivers. Men such as Joseph Priestley, Josiah Wedgwood, and Benjamin Franklin conducted scientific experiments with gases, metals, ceramics, and electricity which had practical applications. The most important technological advancement may have been James Watt’s improved steam engine, which was installed in an English cotton mill in 1785. A steam engine was attached to a boat in 1802, and to a railroad locomotive in 1804. Steamboats were in use both in England and the United States by the first decade of the 19th century. The age of railroads began in the 1820s.
What is called the Industrial Revolution started in the English cotton mills. Production techniques were improved not only by Watt’s steam engine but by a host of other mechanical inventions including the spinning jenny, spinning frame, mule, and power loom. Samuel Slater’s theft of English technology brought this industry to America in 1790. The use of special machines to weave cotton and produce cotton cloth allowed much more cloth to be produced in an hour of operation than before. Cloth produced this way could be sold for lower prices than cloth produced in the traditional manner. Business shifted to the new mode of production. Textile factories required people to tend the machines. Some came from villages whose cottage industries and local handicrafts were meanwhile being ruined by competition from factory-made products. Others came from the farm. The development of commercial law made it possible for business managers to have legally enforceable contracts to buy and sell various commodities including labor. Because the factory system was based on commercial contracts, its enterprises did not employ slave labor but workers who had agreed to sell their effort and skill for a certain time in exchange for wages.
Most nations, including the United States, adopted a policy of protecting their infant manufacturing industries by high tariffs, even at the risk of killing their shipping business and foreign trade. Napoleon Bonaparte tried to choke British commerce with the “Continental System”, which prohibited trade with England in any lands under his control. The effect was to deprive European consumers of the coffee, sugar, and tobacco which the British had once supplied and to ruin French agriculture. Napoleon was forced to grant numerous “licenses”, excepting one or another special situation. Trade was resumed after the war; however, Europe was impoverished from its devastation and had become used to making do with homegrown products. That situation bred policies of trade protection. In 1815, English landed gentry persuaded Parliament to ban wheat imports when the price was below a certain level. Wheat prices rose in England, increasing hunger in its cities. They fell in France, where the farmers suffered. As protectionism spread through Europe, trade and employment plummeted. Although the French Revolution had established the individual right to choose a career, this meant little to people working in a depressed economic environment.
British manufacturers, who enjoyed a comparative advantage in most kinds of products, supported a campaign for “free trade” which would open foreign markets to them at the cost of accepting more competition from imported goods. Richard Cobden, a former cotton merchant from Manchester, led the fight for repeal of the corn laws in 1846. He persuaded the French emperor Napoleon III to enter into an agreement with Great Britain for a mutual reduction of tariffs in 1860. Two years later, France signed a similar agreement with Prussia. The volume of world trade expanded enormously. In contrast with previous periods, this trade included foodstuffs, steel, and other necessities, not just luxury goods. In the Far East, the campaign to liberalize trade was accompanied by military force. Admiral Matthew Perry of the United States opened Japan to foreign commerce and cultural influence after centuries of isolation. Recognizing their own backwardness, the Japanese eagerly modernized their society along western lines. British gunboats forced China to accept opium imports from India after the Chinese government banned this narcotic in 1839. The reason was that the East India Company needed a product to trade for Chinese tea.
During the 19th century, the new industrial order of Europe and North America marched triumphantly over the rest of the world. The old plantation system was in retreat; slavery was finished. The large Jesuit plantations in South America which had produced agricultural exports with Indian labor had been swept away in the previous century. The King of France banned the Jesuit Order in 1764 when the Society was unable to pay its debts from speculative ventures in Martinique. In the late 1860s, cotton plantations in the southern United States fell into ruin as the Confederacy met with military defeat. American wheat farmers, cultivating large acreages with mechanized harvesting equipment, shipped huge quantities of grain by railroad to urban markets in their own country and abroad. New mineral discoveries were made in remote parts of the earth. A transportation network consisting first of inland canals and then railroads allowed inexpensive shipment of commodities from their point of production to distant markets. Electric telegraph lines communicated information instantaneously. New alloys and processes of production improved the cost and quality of steel.
Giant corporations arose to produce newly invented products. Capitalizing on his ties to the railroads, a Scottish immigrant to the United States, Andrew Carnegie, acquired one quarter of this nation’s steel-production capacity by paying close attention to quality and cost and by busting unions. He imported from England the Bessemer process of producing steel from cast iron. John D. Rockefeller created the Standard Oil trust through mergers, efficient production, and aggressive moves against business competitors. In Germany, firms such as Bayer, BASF, and Hoechst captured much of the world’s market for artificial dyes. Synthetic drugs such as aspirin and materials such as celluloid were other products of chemical research. Thomas Edison’s laboratories in Menlo Park and East Orange, New Jersey, invented and developed a variety of products that used electricity. An American tinkerer and race-car driver, Henry Ford, helped to found an automobile-manufacturing firm that bears his name. He is credited with inventing the factory assembly line. His “Model T” Ford offered a reliable product which people could afford. Henry Ford also built up the market for automobiles by paying his workers high wages and scheduling shorter hours of work. In the process, he became one of the richest men on earth.
The Labor Movement
The Industrial Revolution, which began in England during the late 18th century, increased production efficiency and wealth but also increased wealth inequality. Cheap factory-produced goods undercut the market for goods less efficiently produced in handicraft industries. The enclosure and privatization of once public lands by several acts of Parliament closed off the opportunity that rural people might eke out a living in such places. So a multitude of persons, having nothing to sell but their labor, migrated from the countryside into industrial cities. In theory, the new system of contractual labor respected the workers’ freedom and dignity. In practice, individual workers were at a disadvantage in bargaining with their employer. Given a lack of alternative employment in the villages, employers could pick and choose among job applicants, play one off against another, and, if necessary, blacklist uncooperative individuals. The dynamics of increasing production efficiency meant that fewer workers were needed to tend the machines so unemployment tended to increase. Employers had a financial incentive to pay workers as little as possible and extract from them a maximum amount of work. This led to an upward spiral in scheduled work hours. Around 1800, people customarily worked 14 hours a day in the factories, and sometimes longer.
A possible remedy for this intolerable situation was for several workers to bargain jointly with their employer so as to obtain more favorable contracts. However, the British Parliament passed a law in 1799 forbidding such “combinations” which were intended to raise wages or prices in restraint of trade. Effectively, some workers could undercut their fellow employees by agreeing privately to a lower wage. It was necessary for workers to communicate with one another, if only in secret. The earliest workers’ organizations were therefore secret societies which, being illegal, sometimes resorted to violence. Parliament legalized labor unions in 1824-25 so that collective bargaining could take place in the open. Labor issues also became part of reform legislation enacted during this period. The Factory Act of 1833, introduced in Parliament by the Earl of Shaftesbury, limited the hours that children of various ages were allowed to work. A universal 10-hour bill was passed by the British Parliament in 1847. Factory workers were becoming an increasingly active economic and political force in society. Worker agitation built up to a peak in 1848, when the English Chartists pushed their program of universal male suffrage and other reforms.
Perhaps the most important person in the early labor movement in Great Britain was not a worker but Robert Owen, owner of a textile factory. In 1800, Owen bought his father-in-law’s cotton mills at New Lanark, Scotland, which he managed for 29 years. There Owen created a model industrial community where the firm’s 2,500 employees enjoyed superior housing and sanitation, stores with fair prices, and free schooling. The work day at New Lanark was 10 1/2 hours compared with 13 or 14 hours in rival mills. Owen was the principal promoter of the Factory Act of 1819, which limited working hours for children. He supported trade unions and agricultural-industrial co-ops and agitated for a universal eight-hour day. In later years, Owen established a utopian community at New Harmony, Indiana, which used labor rather than gold as a medium of economic exchange. The idealism of Owen and others fed a current of labor activity which appealed to intellectuals and ultimately became the socialist movement.
Another force was the trade-union movement itself. After labor unions became legal in 1824, British workers rapidly organized, especially in the mining and textiles industries. The Trade Union Congress was formed in 1868 to coordinate policies on a national scale. Trade unions also arose in Germany and other nations of continental Europe after the revolution of 1848, as well as in North America. Initially, labor agitation was focused upon reducing the length of the working day. In 1825, carpenters in Boston struck unsuccessfully for a ten-hour day. However, a general strike in 1835 over the same issue persuaded the Philadelphia city government to adopt a “6 to 6” daily schedule, including two hours off for lunch. In 1840, President Martin Van Buren signed an order granting all mechanics and laborers in the executive branch of the federal government a uniform ten-hour day. Another burst of activity took place in the United States around the time of the Civil War when a national movement to promote the eight-hour day headed by a Boston machinist, Ira Steward, achieved several legislative victories. These proved to be hollow. The fight for the eight-hour day continued throughout the 1870s. In the summer of 1872, more than 100,000 building-trades workers in New York City struck for three months to win this concession.
On May 1, 1886, U.S. and Canadian unions conducted a general strike in support of the eight-hour day in several large cities. An estimated 350,000 workers participated in this “May Day” strike. It is best remembered for the bombing which occurred in Chicago’s Haymarket Square three days after the strike and for the trial in which four labor leaders were convicted of inciting violence and sentenced to death. The American Federation of Labor was organized in Ohio later that year. This organization made plans for another strike on May 1, 1890. When delegates to a conference of the Second International in Paris heard of those plans, they endorsed the event. European trade unions, supported by socialists, staged a general strike for the eight-hour day on the same day as the North Americans’ strike, thus turning “May Day” into an international labor holiday. The socialists were a group of political agitators who advocated that labor-friendly governments seize control of productive enterprise. The American Federation of Labor rejected this sweeping program and thereafter confined itself largely to bargaining with employers to its members’ best economic advantage.
The eight-hour day came to pass in most industrialized countries around the time of World War I. Its standard is embodied in the first convention of the International Labor Organization, adopted in 1919. Two years earlier, Marxist socialists had seized control of the Russian state. That brought labor questions to the forefront of world politics. When Russian communism brought other peoples into its political orbit after World War II, the earth’s nations became divided into a socialist and capitalist camp, each governed by a quasi-religious economic philosophy. Karl Marx and his associates had founded the International Workingmen’s Association, later known as the “First International”, in London in 1864. The “Second International”, founded after Marx’s death, included the leaders of most European socialist parties in the period between 1890 and 1919. Then, with the triumph of Bolshevism, the “Third International” was created to serve the ideological objectives of the Soviet state. The overthrow of the Stalinist empire in the former Soviet Union and eastern Europe represented a major setback to those ideals. Meanwhile, the western labor movement has been weakened by the disparity of incomes between union and nonunion workers and the globalization of labor competition.
Education
It may be that the western labor movement is a victim of its own success. Its purpose lay in overcoming economic disadvantage. As the unions succeeded in increasing wages and reducing hours, however, the working class graduated into a more comfortable condition of life. Its new middle-class status raised expectations that the workers’ children were in a position to seize even greater opportunities for advancement in society. The gateway to this multigenerational improvement was education. Education cultivated the mind, rendering it fit for glorious achievements. Persons trained at the universities might become lawyers, doctors, or prime ministers though their parents were poor. It was so much more satisfying to become associated with such an institution, which promised success through individual effort and intelligence, than to belong to an organization representing people who worked with their backs and hands and, if contract talks were successful, won pay increases regardless of merit. So the prospering trade unionists abandoned their heritage built on claims of disadvantage and went for the brighter future that education held forth.
Western education is rooted in medieval institutions associated loosely with the church. Theological training, along with studies in medicine, law, and the liberal arts, formed the core of the curriculum at the University of Paris. A strong humanist tradition developed with exposure to classical Greek and Roman writings. After 1500, Europeans learned to imitate the styles of Latin authors from the period between Cicero and Augustus. The Protestant Reformation, led by religious scholars, viewed education as a means of acquiring direct knowledge of Christian teachings. Indoctrination in religion was the spiritual equivalent of military training for Catholics and Protestants alike. Mistrusting the educated poor, the princes of Europe later tried to take control of the schools and turn them into devices to attract clever young men for service to society. H.G. Wells observed that “the university was part of the recognized machinery of aristocracy ... A pompous and unintelligent classical pretentiousness dominated them ... The only knowledge recognized was an uncritical textual knowledge of a selection of Latin and Greek classics, and the test of a good style was its abundance of quotations, allusions, and stereotyped expressions ... Such a training ... showed the world reflected in a distorting mirror of bad historical analogies.”
Prussia responded to the challenge of defeat by Napoleon’s armies in numerous ways, including reorganization of its schools. University education was improved, and the gymnasium became the center of training for a social elite. Applied science was added to the curriculum. Thanks to its academic institutions, Germany became a leader in chemical technologies. When Prince Albert of Saxe-Coburg and Gotha married England’s Queen Victoria, he took pains to warn his adopted country of its educational deficiencies. He initiated the university commission of 1850 and, a year later, the first International Exhibition at Hyde Park in London, whose purpose was to show the English what other European nations had accomplished artistically and industrially. Anglo-German rivalry during the second half of the 19th century prompted much soul searching among British educators, especially when the Germans began to compete with Britain for naval power. National competitiveness dictated more rigorous training in the natural sciences. The British public began to see the need for popular education, now that steam power had reduced the demand for persons who worked with their muscles and increased the demand for workers who exercised judgment and skill.
The irony was that Great Britain and France, whose educational systems stressed literature and classical studies, had led the way in making scientific discoveries and developing useful technologies based on this knowledge. The great pioneers of experimental science were, for the most part, persons without much education. Neither Kepler nor Descartes was affiliated with a university. Benjamin Franklin, Michael Faraday, and Thomas Edison were largely self-taught. James Watt lacked a university education although he did consult with Joseph Black, a professor of chemistry at Glasgow. Joseph Priestley went to divinity school. Yet, the German example of academic training in the sciences made a deep impression upon the British public. At first, the idea of popular education was quite humorous to Britain’s educated elite. Wells reports that “in the middle Victorian period it was thought to be extraordinarily funny that a shop assistant should lean across the counter and ask two lady customers not to speak French, as he ‘understood the langwidge’ ... The German competitor later on robbed that joke of its fun. Before the death of Queen Victoria, English shop assistants were being badgered to attend evening classes to learn French.”
A western university education had another purpose which, in the long run, had enormous historical impact. With the rise of western science and especially military science, the balance of political power in the world shifted decisively toward the west. Nonwestern leaders realized that, for the sake of self-preservation, their nations needed to modernize along western lines. Specifically, these nations needed to acquire the weapons technology to defend themselves against western aggression. In some cases, they employed westerners as military advisers. Moroccan kings defeated an invading Portuguese army in the 16th century with the help of western weapons technology and recruits. Ranjit Singh in the 19th century employed veterans of Napoleon’s army as instructors to fight the British in India. Eventually, however, these nonwestern regimes found that mere possession of technology was not enough. To use it effectively, they needed disciplined troops, good hygiene, adequate public finances, supporting industrial facilities, and other features of western society. They needed a wholesale adoption of western culture. What they did not want, however, was western religion because conversion to Christianity would mean the loss of their own spiritual identity.
Some nonwestern governments decided to modernize completely. Peter the Great of Russia (1682-1725) is an example of this policy. The young czar traveled to the west and even worked as a carpenter at a shipyard in the Netherlands to gain experience of western methods before returning to his country and embarking upon a program of modernization. Others, in the 19th century, include the Ottoman sultan Mahmud II, King Mongkut of Thailand, and the Japanese reformers of the Meiji restoration. The most common way for nonwestern nations to acquire knowledge of western ways was to send young men to be educated in Europe. A barrier was that most western universities had religious qualifications for students. Until 1871, for example, Oxford University required each candidate for a degree to declare personal acceptance of the Thirty-Nine Articles promulgated by the Church of England. An exception was the University of Padua, located in Venetian territory, which allowed non-Catholics to be admitted. It became a favorite of Greek students, both in Venice and the Ottoman empire. As religious tolerance spread in Europe, more universities dropped their requirement of assent to the locally accepted version of Christianity, and a western education became more attractive to foreign students. This happened just as western technology was demonstrating its superiority.
What had started slowly became a torrent in the 19th century. A new class of western-educated natives of nonwestern societies appeared in nations around the earth. In Russia, this group was given the name “intelligentsia”. Living between two worlds, such individuals became an interface between westerners and their own society. Many took high government positions where they carried out modernization programs. While the nonwestern governments had generally supported this class, the intelligentsia sometimes pursued its own agenda. Westernized Greeks under Prince Ypsilanti rebelled against the Ottoman empire in 1821. A corps of Russian officers, the Decembrists, rose in revolt upon the death of Czar Alexander I in 1825. Both rebellions were crushed. Foreign students in Europe tended to pick up on intellectual and cultural fashions of the day. The Marxist ideology appealed to European intellectuals during the late 19th and early 20th centuries. Chou En-lai and Ho Chi Minh began their revolutionary activities in Paris just after World War I. Sun Yat-sen was educated in Honolulu. Mohandas Gandhi studied in London. Nehru attended Harrow and Cambridge University. Kwame Nkrumah of Ghana attended Lincoln University in the United States. It is fair to say that the anticolonialist movements of the mid 20th century were products both of nationalism and a western education.
The original lure of western culture had been its technology, especially firearms and artillery. German-style training in science spurred British education, just as, a century later, the Soviet launching of Sputnik inspired increased appropriations for American colleges and schools on the theory that the Russians had more rigorous training in science. In the 1930s and 1940s, it did seem that academic science was the key to technological advance. The atom bomb and electronic computers were first developed in that milieu. Yet, ultimately, U.S. education has been more concerned with building a homogeneous society. It has taken an immigrant population and taught it to be American. It has molded farm populations, veterans returning from war, racial minorities, and others outside the mainstream into persons who could live in the cities and hold professional jobs. This function is not unlike that of teaching foreigners how to cope with western society. All the power and wealth that this society has to offer is made available to the one trained to make the right approach.
National Histories
A political dream in Europe has been to duplicate what happened in China at the other end of the Eurasian continent. Once the first Ch’in emperor had created a unified empire, it never really came apart. Sometimes a dynasty would fall, barbarians would intrude, and the empire might be split into several kingdoms; but, inevitably, the Chinese empire would be reconstructed. In western Europe, on the other hand, the political fragmentation that began with the Germanic invasions of the Roman empire in the 5th century A.D. has persisted into modern times. Political empires such as Charlemagne’s, which comprised the bulk of western Europe, proved ephemeral. The Roman Popes tried to unify Europe in a quasi-political religious empire, but this enterprise was doomed by the Great Schism of the 14th century and the Protestant Reformation. Ironically, the empire which European monarchs sought to achieve in Europe was collectively built on a global scale during the third epoch of world history. This, too, came apart in the 20th century.
CivIII political history is characterized by a plurality of nations rather than by unified empires. Its “prophet” was an Italian writer and one-time political advisor named Niccolò Machiavelli, who had advised the ruler of Florence for ten years until the Medici family took charge in 1512. His book, The Prince, published posthumously in 1532, tells what he had learned during that time. Machiavelli denied that the object of statecraft was to advance Christian ideals or build a better society. Politics, as it was actually practiced, was about getting and keeping power. Therefore, heads of state ought to pursue their political self-interest without reservation or remorse. Machiavelli also recommended the “balance of power” strategy which guided European diplomacy in centuries to come. This strategy dictated that the lesser princes combine to oppose any prince who became too powerful. For example, Lorenzo de’ Medici kept the peace by aligning the power of Florence and Milan against Venice and Naples. In the military struggle between Emperor Charles V and Francis I of France during the 1520s, Charles was supported by the Pope and England’s Henry VIII; Francis, by the Ottoman Turks. Once Charles appeared to be winning, Henry and the Pope abruptly switched sides and supported Francis.
The Carolingian dynasty of Charlemagne and his heirs established the geographical framework within which the European nation states emerged. In 843, Charlemagne’s Frankish empire was divided among his three grandsons. Charles the Bald received most of present-day France. Louis the German received the eastern German territories. The middle strip of land running from Belgium through Italy was assigned to Lothair, who was also named Holy Roman Emperor. During the 10th century, the German territories and most of Italy were united in the empire of Otto the Great. France then consisted of a royal domain surrounding Paris and several large fiefdoms, including Burgundy, Normandy, Brittany, and Aquitaine. The Normans conquered lands in southern Italy and Sicily during the 11th century to establish the Kingdom of the Two Sicilies. In 1066, Duke William of Normandy defeated the English king, Harold, at the Battle of Hastings. His Norman dynasty unified England. The English and Burgundians together battled the French royal dynasty descended from Hugh Capet. In its darkest hour, a peasant girl, Joan of Arc, rallied France and turned the tide against the English, who were eventually expelled. King Louis XI of France (1461-83) later brought Burgundy under control. From that time forth, France was a strong and united monarchy.
At the end of the 15th century, the marriage of Ferdinand and Isabella united the Iberian kingdoms of Castile and Aragon, which for centuries had been pushing the Moors back toward North Africa. This task was completed in 1492. Their grandson Charles was also a grandson of the Austrian Habsburg emperor, Maximilian I. Upon Maximilian’s death in 1519, Charles became sole heir to a European empire which included Spain, Austria, Hungary, Bohemia, the Netherlands, the Kingdom of the Two Sicilies, and Spain’s American possessions. In addition, as Holy Roman Emperor, he indirectly controlled the German states of Central Europe. The French king, Francis I, was Charles’ principal rival. Seemingly a child of destiny, Charles had the misfortune to be a Catholic monarch and emperor at the time of the Protestant Reformation. He faced open rebellion from German princes who supported Luther. War broke out in 1546 between Protestant and Catholic forces, and again in 1552. Charles chose to retire to a monastery. Between 1554 and 1556, he turned over his possessions in Italy, Sicily, the Netherlands, and Spain to his son Philip. The Habsburg possessions in Central Europe he bequeathed to his brother Ferdinand. Charles died two years later.
Religious warfare continued into the next century. England had turned Protestant as a result of Henry VIII’s quarrel with the Pope over a divorce. The Thirty Years’ War, which began in 1618, devastated central Europe. It began with resistance to the Catholic Habsburg king Ferdinand II by Protestant Bohemian princes, and grew to include Denmark, Sweden, France, Spain, and most German states. This war pitted Catholics against Protestants and much of Europe (including France) against the Habsburg dynasty. Rivalry between the two Catholic superpowers, France and Austria, continued into the next century. Catholic Spain, which was united with Portugal in 1580, had earlier made a brief bid under Philip II to conquer its European neighbors. However, its power was eroded by the Dutch revolt of 1568-1648, disastrous naval expeditions against England and Holland, and the revolts of Portugal and Catalonia. In the late 17th century, it was France’s turn, having achieved cultural ascendancy in Europe, to seek corresponding political dominance. Louis XIV, the Sun King, engaged in a series of aggressive wars against his neighbors to the east, but was effectively opposed by the Dutch, Swedes, Spanish, and English. France, Europe’s most populous nation, was weakened by the expulsion of its industrious Protestant minority.
Later, the English and French struggled for dominance. The British evicted the French from North America between 1690 and 1763, and from India between 1746 and 1761. Having survived several Turkish sieges of Vienna, the Habsburg Austrian dynasty recovered Hungary at the expense of the Ottoman empire. It inherited Spanish territories in the southern Netherlands and Lombardy. Leopold I persuaded Serbia to join the Habsburg empire by offering religious freedom. Czar Peter the Great defeated Sweden in the Great Northern War of 1700-1721, annexing territories along the Baltic sea. Under Catherine the Great, the Russians conquered much of the Ukraine from the Ottoman empire. Then the rising power of Brandenburg-Prussia clashed with the Austrians, Russians, and French. Poland, including Belorussia, was partitioned among Prussia, Russia, and Austria. The French revolution brought Napoleon to power. His principal opponents were the Germans, Russians, and British. Napoleon’s empire encompassed most of continental Europe, but his ill-fated Russian invasion sapped its military strength. By this time, religious motives no longer played a significant role in European warfare. Wars were fought primarily for political and commercial advantage. Instead of involving the civilian populations, the combatants employed uniformed professional soldiers who carried their own supplies and fought within limits.
Napoleon’s conquests enkindled nationalistic feeling among the German and Italian people. Italy was united as a nation under the monarchy of Victor Emmanuel II in the period between 1859 and 1870. Prince Otto von Bismarck was instrumental in uniting the German states under Prussian rule in the period between 1866 and 1871. Germany established itself as the strongest military power in Europe by defeating France in the Franco-Prussian War of 1870-71. France and England meanwhile added new colonial possessions in Africa and acquired trade enclaves in China. In Asia, the westernized Japanese empire defeated Manchu China in 1894-95, accelerating its partition by western powers, and then defeated Russia in 1904-05. The United States of America, which had consolidated its North American territories during the first half of the 19th century, evicted Spain from Cuba and the Philippines at the end of that century. The commercial rivalry between Great Britain and Germany was accompanied by formation of military alliances between France, Russia, and Britain. Still, the balance-of-power diplomacy enunciated by Machiavelli remained in effect. Then, a decade and a half into the new century, a bloody “world war” broke out in Europe itself. The civilization self-destructed.
World War I had an evident impact on the European political landscape. Four powerful monarchies disappeared after the war’s conclusion. Among the Central Powers, Kaiser Germany, the Habsburg Austro-Hungarian empire, and the Ottoman Turkish empire all fell. In addition, Czarist Russia was replaced by the Soviet Union after, first, the Kerensky provisional government and, then, the Bolshevists seized power. The Americans, who had intervened on the winning side, carried with them an aura of the future. Democracy had triumphed over the old system of European monarchies. Several of the western monarchies which had participated in the war on both sides - the Habsburg dynasty excluded - were genealogically related, mostly through minor German nobility. Germany’s Kaiser Wilhelm and King George V of England were both grandsons of Queen Victoria. The Kaiser imagined that the war might be settled by a polite conversation with his royal cousins, the British king and the Russian czar. It was not to be. The war swept away not only his government but a dream held by Europeans since Charlemagne: that of a last, loose empire of European monarchies.
Democracy and Revolution
The political situation at the end of the third epoch was the opposite of what it had been at the beginning. In the early 16th century, three strong monarchs - Charles V of Germany and Spain, Francis I of France, and Henry VIII of England - bestrode western Europe. Each claimed title to his throne by virtue of a legitimate inheritance sanctioned by God. With the decline of the Papacy, the temporal ruler possessed nearly absolute power within his realm. He even had the power to choose his subjects’ religion. At the end of the epoch, the institution of monarchy appeared to be dying. Democratic government (or pseudo-democratic dictatorship) was taking its place. The political rulers were persons selected by the will of the people rather than by divine favor. Government, which had arisen as a coercive institution in CivI, took on characteristics of the marketplace. It replaced autocratic rule with a softer regime whose power was based on the ability to sell itself to the public.
The transition from absolute monarchy to democratic government is another part of the story pertaining to CivIII. The Magna Carta, which King John had signed to placate rebellious noblemen in 1215, started the trend toward a system of government accountable to the people. To build their lavish palaces and wage dynastic wars, the monarchs of Europe needed large sums of money, which had to be raised by taxation or by borrowing from wealthy merchants and bankers. Parliaments were formed to facilitate the tax-collecting function. At first these were assemblies of men representing the shires and counties of the realm, who testified as to the tax-producing capacity of their area. The king had to call the parliament into session to seek additional sums of money, and this body enjoyed a certain right of refusal. Parliamentary government began to impinge upon the power of the English crown during the 17th century when Cromwell’s armies defeated the Royalists and Charles I was beheaded. A similar process occurred in France in the 1790s when the Estates General, convened by Louis XVI, gave rise to a revolutionary assembly that seized power. A new government was meanwhile being created in the United States of America in which an elected President took the place of the monarch.
As nomadic invasions of settled communities were a recurring theme in the first epoch of world history, so the history of CivIII is marked by a series of political upheavals which historians call “revolutions”. After Czarist Russia and Manchu China had completed their encirclement of the nomads’ pastoral homeland in the 17th century, the threat to civilized society from external barbarians all but disappeared. The new barbarian threat came from within. The commercially developing societies of Europe had produced sharp economic and social disparities between classes. The “lower” classes, economically oppressed and dissatisfied with their lot, became a force threatening social stability. The European empires also came into conflict with colonized peoples who were deprived of their political liberty. These two types of grievance led to a new kind of “barbarian” eruption during the third epoch. Within the heart of the European world empire, masses of dissatisfied people challenged the society in violent ways. The hordes of political and social revolutionaries turned the civilized world upside down.
Examples of these revolutions include: the Dutch revolt against Spain in the 16th century, the English Puritan revolution of the 17th century, the American and French revolutions of the late 18th century, and the Russian and Chinese revolutions of the 20th century. (See Table 6-3.) All of them involved bloodshed incurred in the course of rebellion against the reigning political authority. All were successful in seizing power. In the case of the English Puritan and French revolutions, however, this power subsequently reverted to rulers of the previous type after the death or defeat of their leader. All the revolutions except the Chinese were directed against the institution of monarchy. In the case of the English, French, and Russian revolutions, the lawful monarch was executed. In some cases, parliamentary government replaced the monarchy; in others, dictatorship. The aspiration for religious liberty played a part in the Dutch and English revolutions. On the other hand, the French revolutionaries were anticlerical while the Russian and Chinese Marxists were atheistic. Religious issues generally played a less prominent role in these upheavals as time went by. Economic concerns became increasingly important.
The Dutch, American, and Chinese revolutions were anticolonial movements. The Dutch revolt against Spain was driven by a desire for tolerance of the Protestant religion in a Catholic empire, for constitutional government, and for protection of local interests. It resulted in the creation of an independent Dutch republic. The American Revolution opposed autocratic colonial government and “taxation without representation”. Its leaders likewise established an independent republic. Both revolutions supported and advanced the interests of the merchant class. The Chinese Revolution, on the other hand, combined the anti-plutocratic themes of Marxist revolution with opposition to western influence in China, including an immediate fight against Japanese imperialism. The other three revolutions were internal uprisings in the mother country. The English Puritan revolution featured a struggle for religious liberty and the advancement of parliamentary government, but also included an element of social leveling. The French Revolution brought the emerging interests of the Third Estate - business people, workers, and peasants - to bear against the feudal privileges enjoyed by the nobility and Christian clergy. The Russian revolution of October 1917 was a coup d’état in a war-weary land, instigated by members of an ideologically hardened political party which advocated socialism.
To sensibilities formed in the previous civilization, the idea that lower-class people would violently rebel against divinely appointed governments was indeed scandalous. It was all the more shocking when such rebellions included executing a lawful king. The Puritan regicides became the worst of criminals once the beheaded monarch’s son, Charles II, regained the British throne. Yet, the verdict of history is colored by the fact that these revolutions succeeded. Since the victors write history, their bloody and unlawful acts are mitigated by the revolutionary ideals which they embraced: The English Puritans were fighting for religious liberty. The brave soldiers of the American Revolution were struggling against unjust colonial government. The French revolutionaries were advancing the Rights of Man. It is the combination of those lofty principles with bloodshed and disorder which makes the revolutions seem morally confusing. The social riffraff who brutally executed their superiors were also, in a larger sense, contributing to human progress.
The Unraveling of Western Colonialism
Historical epochs often reverse the themes on which they began. For example, the third epoch, which began with a trio of strong European monarchs (plus Babur and Süleyman the Magnificent), ended with the abolition of absolute monarchy in Europe. The lusty pursuit of gold and material wealth gave way to an anti-plutocratic spirit in the labor movements of the 19th and 20th centuries. This period also began with the European voyages of discovery, the establishment of colonial governments, and the subjugation of native peoples. It brought racially based slavery to the New World. If the third epoch of civilization began on a theme of European dominance, one might expect that it would end on an opposite note. And so it is that in the late 20th century resentment of “western imperialism” runs strong in the nonwestern world. Five hundred years after the first African slaves were brought to the Americas, many of the slaves’ descendants curse white society and its culture as historic oppressors of their people. This curse has unleashed a wave of racial hatred and self-hatred that reverberates in many directions.
The anti-European attitudes exhibited today are a consequence of the fact that the history of CivIII is abnormally skewed toward the European experience. White Europeans were once the conquerors and civilizers of peoples around the earth. Their acquisitive and secular culture, begun in Renaissance times, has become the world’s culture. Quite naturally, most of the world’s people, being non-European, will see this as something alien to themselves and become antagonistic. The Europeans conquered other peoples’ lands through superior technology combined with an insatiable greed and a martial spirit built up over centuries of fighting Moslem armies. The Spanish and Portuguese, coming in the first wave of European invasion, tended to be motivated at least in part by religious conquest. Having recently defeated the Moors, they stood closest to the front of the religious wars. As the initiative passed to the Dutch and English, the commercial element became more pronounced. These were merchants and adventurers, unaccompanied by Jesuit missionaries, who sought to grow rich through trade. The commercial culture blossomed under their regime.
Force inevitably followed European explorations and trading expeditions. The European adventurers possessed muskets and cannon, laws, written language, and ocean-going ships. The spectacular conquests of Mexico and Peru were followed by colonization of sparsely populated lands elsewhere in the Americas. The British converted their tax-collection arrangements with the Mogul empire into imperial rule of the Indian subcontinent. They first cut a deal with the Sikh empire-builder, Ranjit Singh, to respect the Sutlej river as the boundary of their respective empires. A generation later, between 1845 and 1849, Britain conquered the Sikh empire in the Punjab. The western-style Russian army was able decisively to defeat the Ottoman Turks in the Russo-Turkish war of 1768-74, which prompted Sultan Selim III to undertake similar measures to modernize his armed forces. After Napoleon’s armies conquered and withdrew from Egypt, Muhammad Ali came to power in Egypt as a viceroy of the Ottoman empire. The British and French prevented his conquest of Syria and Palestine at Ottoman expense. The British later established a protectorate in the region including the Sudan. The French had controlled Algeria since 1830. Following the Opium War of 1839-42 with China, the British established a colony in Hong Kong.
The 19th century was a golden age of European nationalism. In the decades after Napoleon’s forces had fought English, Austrian, Prussian, Spanish, and Russian armies, these various peoples derived a sense of national pride from having defeated the great French general. The Germans were especially filled with this proud spirit. German musicians, philosophers, scientists, and poets achieved cultural ascendancy during this period. German literature was said to be best in the original, free of French influence. A similar attitude infected the youthful culture of the United States. Russian novelists and symphonic composers came into their own during this period, producing classics of their genre. Italian nationalism found a voice in the operas of Verdi and Puccini and a political champion in Garibaldi. Greeks fought for national independence against the Ottoman empire. Early in the century, Haitian guerrillas under the command of Toussaint L’Ouverture defeated Napoleon’s crack troops. Simón Bolívar and José de San Martín ended Spanish colonial rule in South America. An attempt to establish a French empire in Mexico during the U.S. Civil War was foiled.
Nationalism caught up with other nonwestern peoples during the 20th century. The Japanese military victory over Russia in 1905 showed Asians that a major European power could be beaten. That became even more clear after European society sacrificed the cream of its youth on the battlefield in World War I. Among Woodrow Wilson’s “Fourteen Points”, which framed the peace settlement at Versailles, was a statement affirming all peoples’ right to national self-determination. An immediate result was a resurgence of Turkish nationalism in the movement led by Kemal Atatürk. Nationalists led by Sun Yat-sen attempted to build a democratic nation in China. In India, Mohandas Gandhi agitated nonviolently for an end to British rule. After Europe experienced another bloodbath in World War II, the anticolonial movement began in earnest. Great Britain and France divested their colonial empires. The Philippines received its independence from the United States, as did Indonesia from the Netherlands. India and Pakistan became self-governing nations in 1947. A year later, an independent Jewish state was established in Palestine. Communist forces ousted the Chinese nationalists. Many new African nations were created from former colonies during the 1960s. Vietnamese communists expelled French and American armies from Indochina.
Materialism and Disintegration
A new kind of philosophy was conceived during the third epoch. This “materialist” philosophy is the mother of the modern social sciences. A key figure in this movement was the Scottish philosopher and historian David Hume, a friend and mentor of Adam Smith and an important economist in his own right. Hume’s philosophy, like that of other materialists, held that the human mind resembled a machine. Where Plato had taught that ideas were real and the natural world was created from them, materialistic philosophers held that ideas were the product of a physical brain. The brain carried on certain processes which explain how people think. The philosophies of Descartes, Spinoza, and Leibniz, which are sometimes called “rationalist” philosophies, set the stage for the British empiricists’ even more devastating attack on objective ideas. For the empiricists, sense data were the primary source of knowledge. Ideas were the synthetic product of worldly experience. The wholeness of ideas disappeared.
After the embers of romanticism had cooled, 19th century Europeans pursued realism in literature and art. Gone were sentimental expressions conveying beauty and goodness. People wanted to see the ugly truth. And so, detailed descriptions of life in urban slums filled the novels of Charles Dickens and Victor Hugo. The French painters Gustave Courbet and Jean-François Millet were celebrated artists of the realist school. This was the age of Karl Marx and Charles Darwin. In their schemes of creation, design came from the bottom up. When photography was invented, mindless machines were able to produce a more accurate visual representation of worldly scenes than the best artists. Inspired by the camera, painters such as Degas tried to create images of objects as if seen from unexpected angles. Impressionist painters abandoned the attempt to depict solid shapes; the pointillists among their successors placed colored dots upon the canvas to imitate how rays of light strike the eye. This break with tradition led to other schools of experimental art - cubism, surrealism, Dada.
All this “modern art” came to a head in the period just before the outbreak of World War I. The public could not fathom the presumed excellence that lay in its disharmonious forms. Picasso’s grotesque pictures reminded Carl Jung of the “lacerated” thought patterns that he had observed in schizophrenic patients. The poet Yeats complained that “things fall apart; the center cannot hold.” Artistic expressions were pieced together in eclectic assemblages lacking form. Not coincidentally, crossword puzzles were invented during this period. Such games arranged words in mechanical ways without reference to meaning or expressive flow. The artist seemed to be taunting the public, challenging it to make sense of his work. This was the opposite of the beautiful art created in Renaissance times, whose objects were painted in full and round shapes. It signaled the end of an epoch whose culture many today deem synonymous with “civilization.”
A SHORT HISTORY OF
CIVILIZATION IV
A Weight Lifted from our Cultural Shoulders
The only daughter of England’s King George V recalled that her father had an almost pathological aversion to change. “The postwar world was to him an abomination,” she said. “He disapproved of Soviet Russia, painted fingernails, women who smoked in public, cocktails, frivolous hats, American jazz, and the growing habit of going away for weekends.”
After the mind-numbing carnage of World War I, the world’s people awoke to a different kind of experience. It had a lighter air. There was a distinct sense of frivolity about this period. Suddenly, a sense of the modern hit public consciousness. Automobiles were replacing horse-drawn carriages. The first radio stations began broadcasting. In the years following the Great War, there were vaudeville shows, New Orleans jazz, chorus lines, and a new type of entertainment called the movies. Unserious in tone, this was an age of popular culture. It was a relief from the culture of high art that had filled the 19th century.
The fourth civilization came together in the wreckage of the previous culture when western society appeared to be committing suicide. The bloody world war, sinking of the Titanic, discordant themes in music and art, and overall pretentiousness of the old order created an urge to jump off the historical track and try something different. In Europe, the old antagonisms continued to rage. The harsh reparations imposed upon Germany by the treaty of Versailles left a bitterness that led to Hitler’s assumption of power. Mussolini and his fascist supporters bullied their parliamentary opponents into submission. Claiming that their enemies would be “swept into the dustbin of history”, the Bolshevists staged a coup d’état in Russia. Angry ideologies of various types were destroying public civility. Cultural decadence and economic crisis shook society at its roots. In America, on the other hand, people were enjoying an unprecedented level of prosperity, there was a casual disregard of laws prohibiting the use of alcoholic beverages, new dance fashions emanated from Harlem, and the movies began to talk. Americans tuned out the hostile, intellectually overpowering message of Lenin and opened themselves up instead to the lightheartedness of Bing Crosby.
Some Difficulties in Telling this History
To write the history of CivIV presents some special difficulties. By its nature, entertainment appeals in different ways to different individuals. There is such a diversity of interests that historians would be hard pressed to find a single set of events to represent a community’s experience. A sensible approach might be to select entertainment experiences on the basis of audience size. The history of this culture might therefore include descriptions of the most popular shows. Unlike most historical events, those of the entertainment culture would consist of staged productions experienced by people sitting in movie theaters or in front of television sets at many different locations. For example, on the evening of December 17, 1969, more Americans were tuned to the Tonight Show to watch “Tiny Tim” marry “Miss Vicky” Budinger than for any other event in the history of that show. Can simultaneous tunings to a television frequency by a large but geographically scattered audience, however, be considered an event in the same sense as the “Woodstock” rock concert, which took place on Max Yasgur’s farm earlier that same year?
The history of entertainment should, of course, include reference to specific images and sounds to which the audiences have paid attention. Each venue has its own set of internal experiences. For instance, the history of Major League baseball might include exciting events such as Don Larsen’s “perfect game” in the 1956 World Series or the time when Babe Ruth pointed to the centerfield bleachers during the 1932 World Series with the Cubs and hit the very next pitch to that spot for a home run. Seasoned sportswriters could surely come up with a list of memorable incidents in each sport and write a history which narrates them in some way. Most would agree, however, that a history of Major League baseball should also include events related to its institutional experience. For example, Branch Rickey’s decision to add Jackie Robinson to the roster of the Brooklyn Dodgers or the change in the free-agency rule might be considered historically significant events related to the game of baseball which did not happen in the games themselves. One can see that a multi-volume history could easily be written on Major League baseball alone.
The entertainment culture includes much more than this. Some of its categories might be: drama, sports, popular music and dance, stand-up or situation comedy, exotic or pornographic exhibitions, and public demonstrations. In addition, what we consider to be “the news” is increasingly being treated as a form of entertainment. Given the eclectic nature of its content, the progress of entertainment cannot be told in a simple narration like the rise and fall of imperial dynasties. Each type, or venue, would have its own history. The idea that a society as large as that of the United States could have a focused set of entertainment experiences seems quite unrealistic. The problem is compounded by the fact that each society or nation has its own pastimes. The people living in India, Africa, the Middle East, and Latin America may not be interested in the same kinds of music, drama, or sports as those in the United States. They each have their own type of public diversion. Therefore, a world history which makes entertainment a major focus of attention may have less coherence than what people would want in such works.
World history could be presented in a collection of stories, visual images, samples of music, and memorable lines representing the highlights of popular culture. But, if history consists only of a stream of audiovisual vignettes, it may lack recognizable themes. Too much would depend on the historian’s judgment as to what is important. Photojournalists often produce a montage of images using time as the unifying element. For instance, a television documentary on events which Americans experienced in 1957 might include these images: President Eisenhower sending federal troops to Little Rock to enforce a school desegregation order; Elvis Presley singing “All Shook Up”; Mickey Mantle hitting a home run in Yankee Stadium; Soviet rockets launching Sputnik; the tail fins on that year’s models of cars.
History in that mode becomes a kind of generational experience, appealing separately to each age group. Each decade brings its own style in popular music, clothing fashions, and political leadership. It becomes the journalist’s function to characterize each period in a meaningful way and anticipate what might come next. So, the “Roaring Twenties” in America were followed by the depressed “Thirties”, the wartime “Forties”, the bland but prosperous “Fifties”, and the turbulent “Sixties” of the youth culture and anti-war rebelliousness.
A book-based history necessarily takes another approach. This book will tell the story of structures supporting entertainment as well as of the entertainment itself. Being a commercial industry, popular entertainment needs to attract enough revenue to pay for the programming. Its events can be financed directly through ticket sales, or, indirectly, through commercial advertising and associated activities such as gambling or the sale of licensed products. Such considerations would partly drive its history. Another aspect would concern the mode of presentation. Are the audiences live, or do people experience entertainment events via radio, television, motion pictures, or cassette tape? Here again, practices have changed over the years. The history presented in this chapter will focus upon entertainment in the United States, both to keep the discussion to a manageable length and to acknowledge the fact that communications technology has spread American entertainment quite broadly to other parts of the world. More than other national cultures, American entertainment has become the basis of a worldwide civilization.
Amateur and Professional Sports
Athletic events have been a part of popular entertainment in all countries. Horse racing came to America with the Dutch settlement of New Amsterdam in the 17th century. In 1779, the Earl of Derby instituted a horse race on his property at Epsom Downs in England, which became an annual event. A horse-racing event on Long Island in 1823 drew 60,000 spectators. This is believed to be the first large crowd to watch a sporting event in the United States. Regular racing began at Saratoga Springs in 1863 and at Churchill Downs in Kentucky in 1875. Cockfighting was also a popular sport in many American cities until the Massachusetts legislature banned it in 1836. Huge sums of money were wagered on these fights. Boxing became an organized sport when rules were imposed in the 18th century. An amphitheater owned by Jack Broughton near Tottenham Court Road in London was the center of English boxing. Rules issued there in 1743 governed the sport for more than a century. Then, in 1867, the Marquess of Queensberry sponsored a new set of rules, giving the contestants gloves, dividing the contest into separate rounds, and providing for a ten-second count before a downed boxer lost the bout. John L. Sullivan was the reigning world heavyweight champion fighting with bare knuckles. In 1892, he lost to James J. Corbett fighting under the Queensberry rules.
Sports such as boxing, wrestling, fencing, or archery, which involve physical violence or the use of weapons, utilize skills which once had military value. English monarchs between the 11th and 15th centuries required their subjects to practice archery. English proficiency with the long bow helped produce a string of victories over the French in the Hundred Years’ War. The Asian sports of judo and karate are martial arts with a different twist. Judo, derived from the Japanese art of jiu jitsu, exercises mental and physical disciplines that were attractive to the samurai warriors. It taught the warrior how to win by yielding to superior force. Karate was developed in Okinawa by patriots resisting foreign rule. Because their Japanese conquerors forbade them to possess weapons, Okinawan patriots trained at night in techniques of unarmed combat. As spectator sports, however, such contests may be too individualistic to attract large crowds. The sports which play the greatest role as entertainment tend to be team sports. Such teams are symbolically linked with communities to which the spectators may belong.
The game of football is one such sport. During the Middle Ages, residents of English villages used to play a primitive version of this game. The whole town was the playing field. An unlimited number of players on both sides would try to kick a ball between goal markers at opposite ends of the town. The sport tended to become rowdy, so King Edward II banned it in 1314. Young people continued to play football on an informal basis well into the 19th century. Then this game was picked up by English public schools such as Eton, Harrow, and Rugby. Each school had its own version. The first attempt to develop a standard set of rules for football might have been at a conference held in 1848 at Cambridge University. Fourteen different schools were represented. A second conference in 1862 produced agreement on ten rules for football, which became the “Cambridge University Football Rules.” In October 1863, football players from the London area gathered at the Freemasons’ Tavern, where they formed “The Football Association”. Association football, or “soccer”, thus became formally organized. (The word “soccer” is student slang for the letters s-o-c in “association”.) English players took this game with them to all parts of the world.
In 1823, a young football player at the Rugby School in Warwickshire, England, committed an impulsive act which forever changed the game. Against all rules, William Webb Ellis suddenly picked up the ball and began running with it down the field. Football rules then permitted only kicking and bouncing the ball. Word of Ellis’ rules infraction spread to other schools. Some students decided to try the game as it had been played “at Rugby”. The Rugby version of football, today known simply as “Rugby”, acquired its own participants and rules. In 1871, twenty-one clubs in the London area partial to this game formed the Rugby Football Union. Rugby was then an amateur sport. However, the game became popular among industrial workers in northern England. Many who were eager to play for their club could not afford to take unpaid time off from their work. Northern clubs began to pay these workers small sums of money to compensate them for their travel expenses and wage loss. When the issue of paying players came up at the annual meeting of the Rugby Union in 1893, a vote was taken and a decision made not to permit such payments. Two years later, delegates from several Rugby clubs in the North organized their own league on the basis of allowing the payments. This “Northern Union” became, in effect, a league for professional players.
The American game of football is a derivative of rugby. While the first intercollegiate football game took place between Rutgers and Princeton in 1869, these two teams were actually playing soccer, which forbade running with the ball. Rugby-style football came to the United States by way of Canada. In May 1874, athletes at Harvard University invited a team from McGill University in Montreal to play a game of football. Watching the Canadians practice, the Harvard players realized that the teams were playing two different games. So a compromise was reached. The two teams would play the first half of the game under Harvard’s soccer rules, and under McGill’s rugby rules in the second half. The Harvard players decided that they liked rugby better, so it became a part of American collegiate sports. Walter Camp, Yale’s first coach, is known as the father of American football. He developed the concept of eleven-man teams, the scrimmage line, and other features and rules of the modern game. After several college students were killed while playing this sport, President Theodore Roosevelt convened a conference for the purpose of improving safety. To relieve congestion at the line of scrimmage, a committee chaired by Walter Camp decided to allow the forward pass. Notre Dame’s Knute Rockne popularized it during an upset victory over Army in 1913.
Venerable tradition has it that Abner Doubleday, a Civil War general, invented the American game of baseball at Cooperstown, New York, in 1839. Soldiers in the Union army played this game for recreation between battles and took it home with them after the war. However, a book published in 1834 presents rules for a game similar to baseball and a woodcut illustration which shows boys playing it on Boston Commons. It seems more likely, then, that American baseball was derived from rounders, a game played by English schoolboys since medieval times. A game called “old cats” used a wooden cat in the shape of a spindle and a stick for hitting this object. As in the British game of cricket, a batter ran between two bases after hitting a pitched ball. He was “out” if an opposing player caught the ball on the bounce or in flight. In American baseball, the number of bases was increased to four. After hitting the ball, the batter ran as far as he could in a clockwise direction without being tagged. The running direction was later changed. In 1842, a group of young men began to play baseball for recreation in lower Manhattan. They became organized as the “Knickerbockers” club three years later. In 1857, a group of amateur clubs formed the National Association of Baseball Players which issued the game’s first uniform set of rules.
Basketball was a game started from scratch in the United States. Its inventor, Dr. James Naismith, was a physical-education instructor at the YMCA in Springfield, Massachusetts. He was looking for a sport which young people might play during the winter or at night. Naismith invented basketball after studying other existing sports. He determined what elements were needed and then developed a set of rules. The first basketball game was played in December 1891 in the Springfield YMCA gymnasium. Peach baskets were suspended at either end of the court. They had to be emptied by a man on a ladder each time that a “basket” was made. In 1906, open loops mounted on a backboard replaced these receptacles and the number of players on a team was reduced from seven to five. The game of basketball became an immediate sensation. By the early 1920s, it was the most popular sport in American schools. The annual state high-school basketball tournament became the biggest athletic event in town. In 1923, Robert and Helen Lynd noted in their sociological study Middletown: “More civic loyalty centers around basket-ball than any other thing.”
The sports of football, baseball, basketball, hockey, and golf, among others, have become important entertainment features on television, appealing especially to men. Because the event is unscripted, it sustains viewer suspense. The tradition of team play includes a moral dimension in suggesting vigorous camaraderie and unselfish cooperation in an endeavor identified with a larger community. These sports are regularly played in American high schools and colleges, where spectator interest is equated with “school spirit”. In professional sports, a polite myth is maintained that professional teams represent cities or regions; civic piety obliges residents of those communities to root for the home team. Professionalism came first to baseball. Teams of professional players representing certain U.S. cities formed the National League in 1876, while the American League was created in 1900. The first World Series was played in 1903. The American Professional Football Association, forerunner of the National Football League, was established in 1920, and the American Football League in 1960. The National Basketball Association was founded in 1949 through a merger of two groups.
Sports competition also takes place between teams representing nations during the International Olympic Games, held once every four years. This worldwide event revives a tradition begun in Greece during the 8th century B.C. Athletes from the Greek city-states competed then in such sports as running, jumping, throwing, and wrestling. While the games were initially held in honor of the Greeks’ ancestral gods, they became a force for pan-Hellenic cultural cohesion and peace. Their revival in 1896 A.D. was a result of efforts by the French baron Pierre de Coubertin. The baron had been impressed by the enthusiasm with which rugby was played in English public schools. He was also inspired to revive Olympic competition by the recent excavation of archeological sites at Olympia and even the controversy regarding free trade. “Let us export our oarsmen, our runners, our fencers, into other countries. This is the true Free Trade of the future,” he declared. Baron de Coubertin’s proposal to revive the Olympic games won acceptance at an international athletic congress held in Paris in 1894. The first Olympiad of the modern era took place in Athens, Greece, in April 1896.
Other Entertainment in 19th Century America
An early form of entertainment in the United States was the lecture circuit. Well-known writers, scientists, preachers, educators, and other learned persons would go on tour, lecturing on topics of interest for a fee. The lyceum movement, begun in New England in the 1820s, included more than 3,000 local groups by the mid 1830s. The English novelists Thackeray and Dickens were among the more popular attractions. Ralph Waldo Emerson, Horace Mann, Theodore Parker, and Mark Twain were American celebrities in demand. While these events appeared to be instructional, the social aspect was also important. They were a suitable form of entertainment for couples while courting. Lectures at the Cooper Union in New York City were vehicles of self-improvement for the working class. Rather less serious were the traveling shows that featured dance companies, acrobats, wax museums, singers, ventriloquists, and comedians such as Artemus Ward. Equestrian acts, combining acrobatic or circus-like features, were popular in the 1830s. Exotic animals or artifacts were exhibited in traveling circuses or in museums, sometimes pretending to serve a moral purpose but always entertaining.
The live theater, which became popular in the United States during the 1840s, was at first considered to be a questionable activity. Religious stigma against theatrical presentations went back to the 1600s. Theaters tended to be located next to billiard parlors and saloons. In addition to the main piece, the performers usually did short routines that featured women in breeches singing bawdy tunes. Italian and French ballet added to the sense of scandal by putting the female dancers in skimpy costumes with a full view of their legs. Many of the early theatrical productions were imported from England. Famous British tragedians such as Edmund Kean and Junius Brutus Booth, father of Lincoln’s assassin, regularly performed on the American stage. As it did later in Hollywood, the star system came to dominate the live theater. Edwin Forrest, a melodramatic tragedian, and Charlotte Cushman, who specialized in performing male parts in Shakespearean plays, were among the better-known American performers on tour. The American theater included stock characters such as the villainous Yankee or Davy Crockett-like frontier characters. By the end of the 19th century, most American cities had their own opera house which gave several theatrical performances each week.
In the 1890s, mechanical gadgets became an important part of the American entertainment scene. Besides popularizing Egyptian belly dancers, the World’s Columbian Exposition of 1893 included a Ferris wheel which rose 200 feet above the shore of Lake Michigan. The carnival midway made its debut at this event. In many American cities, the streetcar companies operated amusement parks on the outskirts of town. Families on weekends could ride the trolley to the park where they might enjoy a picnic together, experience roller-coaster rides, and listen to concert bands. The penny arcades were filled with mechanical novelties, many of the peepshow variety. A crank-driven “movie machine” spun photographic images on a reel of cards attached to a wooden spool, creating the illusion of motion. A fortune-telling machine purported to read a player’s palm and predict his future. Another type of machine tested men’s arm strength and endurance. A coin-operated phonograph developed from Edison’s invention emitted recorded sounds. For a nickel, thrill-seeking patrons could experience a mild electric shock by gripping two handle bars. And, of course, the “French postcard” peepshows showed women in various stages of undress.
Racial Overtones
A theme unique to American entertainment may have been the relationship between black performers - or white performers imitating them - and predominantly white audiences. Initially, white Americans found the song and dance routines of Negro slaves to be childishly amusing. Daddy Rice’s “Jim Crow” act started the craze which blossomed into the minstrel shows. White entertainers by the score, their faces darkened with burnt cork, traveled the country exhibiting the southern Negro’s songs, dance steps, humor, and style of speech. This was America’s most popular form of entertainment for a half century. The minstrel shows were typically performed in two parts. In the first part, a dozen or so entertainers would be gathered in a semicircle, shaking tambourines. An elegantly clothed “interlocutor” standing in the middle would act as straight man to two gaudily dressed comedians, Mr. Tambo and Mr. Bones. This part of the show was a mixture of repartee, banjo playing, ballads, and dancing, smartly performed, until each performer in turn broke away from the circle, did a “walk around” followed by a jig, and then retired from the stage. The second part of the show was an assortment of monologues, comedy skits, songs, and dances, followed by a burlesque in which a male entertainer wearing a wig, brassiere, and swishing skirts pretended to be a woman.
The “Virginia Minstrels”, starring Dan Emmett, who later composed “Dixie”, opened in New York City in 1843 with great success. It was followed by the “Kentucky Minstrels”, “Ethiopian Minstrels”, and other minstrel companies composed entirely of white male performers. Black entertainers did not perform in these shows until after the Civil War. The first all-black company was the “Plantation Minstrel Company” whose members, although black, darkened their faces and circled their lips with white or red paint to exaggerate the racial features. A similar event occurred with the “Tom shows”, which were melodramatic performances based on Harriet Beecher Stowe’s book, Uncle Tom’s Cabin. These shows included such exciting features as bloodhounds chasing Eliza across the ice and the famous whipping scene. The slave girl, Topsy, often played by female impersonators, was a highly memorable character. The scene of Little Eva going to heaven never failed to raise tears. The first performance of Uncle Tom’s Cabin took place in Troy, New York, in 1853. While black singers were used in the chorus, whites generally took the acting parts.
Even after the minstrel shows declined in popularity, the routines performed in the “Olio” were continued in the form of vaudeville shows, which offered a complete evening’s entertainment. There were both white and black companies. The Theater Owners Booking Association was an important vaudeville circuit for black performers after World War I. Pantages, Loew’s, and the Keith Orpheum Combine booked predominantly white performers. Vaudeville was performed both on the road and in big-city theaters. The acts included everything from tap dancing, comedy, and snake charming to demonstrations by heavy-weight boxing champions. Bill “Bojangles” Robinson, the tap dancer, was the highest paid black performer. Vaudeville was an important mode of entertainment from the 1880s until the early 1930s, when radio cut into its audience. Famous radio comedians such as Jack Benny and George Burns began their careers as vaudeville performers. Radio’s first hit show, “Amos ‘n Andy”, begun in 1928, continued the American tradition of expressing black people’s humor through the mouths of white entertainers. However, the television version which went on the air in 1951 featured a black cast.
Inevitably, black entertainers worked in their own productions and white imitators faded from the scene. As early as the 1820s, a black theater group, the African Company, performed Shakespearean plays in New York City. Ira Aldridge, an actor known as the “African Roscius”, toured Europe to great acclaim. The Luca family was a popular group of black singers who performed for Queen Victoria. By the turn of the century, black entertainers were appearing without blackface in their own shows. Singing comedians such as Bert Williams and George Walker starred in the so-called “coon shows” in which a well-dressed male performer flanked by an alluring female chorus would do a musical routine with a derby and twirling cane. A dance routine known as the “cakewalk” created a sensation in Europe. Female singers such as Ethel Waters, Florence Mills, and Josephine Baker, who performed at the Folies Bergère, developed routines which became hugely popular with white audiences in the 1920s. The Harlem Globetrotters entertained crowds with a burlesque of basketball. More serious presentations of black cultural themes included Paul Robeson’s performance in The Emperor Jones and George Gershwin’s social opera, Porgy and Bess.
The impact of black culture on American popular entertainment may be greatest in the field of music. The Negro slaves brought certain rhythms with them from Africa which many whites found appealing. In the early 1800s, slaves used to gather at a place in New Orleans called “Congo Square” to perform their music for tourists. Black melodies were converted into popular tunes by white composers such as Stephen Foster. A slave tune originally called “Old Zip Coon” became “Turkey in the Straw”. At the turn of the century, a new kind of piano music called “ragtime” incorporated a syncopated jazz beat. It was made popular by Scott Joplin, a black composer living in Missouri. Ragtime opened the door for jazz music performed by black artists such as Buddy Bolden and W.C. Handy. The Clef Club’s Syncopated Orchestra brought jazz to Carnegie Hall in 1912. From New Orleans came Louis Armstrong, who, together with Duke Ellington, Miles Davis, Charlie Parker, Count Basie, and Ella Fitzgerald, developed this into a distinctively American musical art form. Blues singers supplied vocal accompaniment. During the 1920s, the Savoy Ballroom in Harlem was known as “the Home of the Happy Feet”. The Charleston and the Lindy Hop (named after Charles Lindbergh) began there, setting off a new dance style in which couples danced apart.
The Big Band era, beginning in the late 1920s, centered on “swing music” performed in a somewhat less spontaneous way. White instrumental groups dominated this period. White crooners such as Bing Crosby and Perry Como set the tone of popular music with their laid-back style. Dancers such as Ginger Rogers and Fred Astaire projected an image of cool sophistication. Frank Sinatra became famous as a vocalist with the Tommy Dorsey band. Black musicians were being relegated to a cultural ghetto, albeit one which continued to be a source of new ideas. Bebop music challenged swing in the early 1940s. Mahalia Jackson’s Gospel recordings sold more than a million copies. Hotel supper clubs provided opportunities for black entertainers such as Johnny Mathis, Lena Horne, and Nat King Cole to find work. Black-oriented radio stations played a peculiar kind of music that attracted many white fans. Harry Belafonte’s Calypso started a brief musical craze in the 1950s with its Caribbean beat.
Black-Flavored White Singers
From the 1930s through the mid 1950s, American popular music bore the stamp of Bing Crosby, Frank Sinatra, and other white singers who had mastered the cool style of black jazz musicians. Bing Crosby did not seem to take himself seriously as a singer. He often missed shows and, in fact, could not read a note of music. Though a family man, Crosby liked to chase chorus girls and drink booze. His intimate style of “crooning” softly into the microphone inspired a generation of imitators. He had a smooth and pleasing voice which made singing seem easy. He may have borrowed his habit of “scatting” - substituting sounds for words - from Louis Armstrong. On stage, Bing Crosby projected an image of carefree sophistication and good cheer. He played the part of a wisecracking hustler in light-comedy films co-starring Bob Hope. His breezy, easygoing way captured the spirit of America in the late jazz age. Frank Sinatra combined Crosby’s intimate style of singing with overt sex appeal. His boyish intensity excited a generation of bobbysoxers. The young Sinatra precipitated a riot in Times Square on Columbus Day, 1944. In later years, he became pals with Dean Martin and Sammy Davis Jr., making a big splash in Las Vegas. His immense popularity even survived the coming of rock ‘n roll.
Sam Phillips, a Memphis record producer, is supposed to have said in early 1953: “If I could find a white man who had the Negro sound and the Negro feel, I could make a billion dollars.” He found such a person in Elvis Presley, a young truck driver who came into Phillips’ studio later that year to make a private recording of a now forgotten song, “My Happiness.” Another song, “That’s All Right, Mama”, which Elvis recorded a year later, attracted considerable attention. It was a fast blues-swing piece with a strong country influence, like those heard on black radio stations. After performing mostly in the South for two years, Elvis broke out of this regional ghetto to become a national sensation. With his sideburns and half-smiling sneer, he presented a rebellious image to Americans of high-school age. Mobs of screaming teenage girls attended his concerts. In an age of social conformity, his stage performances included sexually suggestive hip gyrations. Therefore, when Elvis appeared on the Ed Sullivan Show in September 1956, television cameramen were instructed to show only the upper part of his body. His songs struck a chord with restless youth and changed the chemistry of race relations in America.
Elvis Presley was drafted into the U.S. Army in 1958. He married, returned to the United States to resume both a singing and acting career, became a headline performer in Las Vegas, and died in 1977 of a suspected overdose of drugs and medications. The rock ‘n roll revolution which he began resumed, after a brief interlude of experimentation with folk music, in the early 1960s. A young, fun-loving Harvard graduate then occupied the White House. The black Civil Rights movement, supported by idealistic white students, was raging in the South. Bob Dylan’s folk anthem, “The Times They Are a-Changin’”, seemed to capture the spirit of this generational and racial change. Then, suddenly, a new wave of rock music came to America from across the seas. Invited to perform on the Ed Sullivan Show, a British band called the “Beatles” arrived at the international airport in New York City greeted by newspaper reporters and a large cheering crowd. More than 70 million people witnessed their American television debut on February 9, 1964. Beatles tunes hit the top of the charts; and then songs by other English groups such as Herman’s Hermits and the Rolling Stones. The “British invasion”, bringing rock ‘n roll back to its homeland from abroad, was in full swing.
During the next five years, a culture saturated with rock ‘n roll music became intertwined with drug experimentation, racial protest, the anti-war movement, free love, and other strands of the youth culture. The Beatles were transformed from a clean-cut band which produced ballad-like hit singles into an album-producing group which experimented with marijuana and Eastern meditation. America’s youth shared in the Beatles’ personal growth, appreciating their political views, their offbeat humor and shaggy appearance, their interest in drugs, and the direction of their music. The ’60s rock culture integrated black and white music as never before. Black “Motown” performers including Diana Ross and the Supremes continued the tradition of romantic ballads while Jimi Hendrix appealed to the avant-garde. After the flower children visited San Francisco in the summer of 1967, the mood grew uglier. The anti-war protests intensified, and two political assassinations took place. There were race riots in several large cities. The Chicago police clubbed protesters at the Democratic national convention. Rock fans assembled for a colossal rally at the “Woodstock” concert in August 1969. This peaceful event, which drew one-half million spectators, was followed by another of similar size at Altamont in California which turned violent.
Rock ‘n roll music now belongs to an international culture that appeals to young people in Asia and eastern Europe as well as in Britain and the United States. In America, however, it has entered a mature phase. There are no more “hit parades” of Top 40 singles; no more Major Bowes’ amateur hours giving a Frank Sinatra his first break, or television variety shows like Ed Sullivan’s which would showcase an Elvis Presley. Recording artists today produce albums rather than singles. The corporate managers who control this music are into packaging sounds for different radio audiences and consumer types. The amplified heavy-metal sound of the 1970s had drifted far from the simple love songs of the previous decade. Punk rock was more audaciously theatrical. Video presentations entered the rock scene. The biggest-selling album in the 1980s was Michael Jackson’s “Thriller”, promoted with the help of MTV. Jackson is the grown-up version of that small black boy who once performed with the Jackson Five, now strangely vulnerable, with an innocence which still appeals to teenagers and preteens. British rock icons such as Eric Clapton and Elton John are joined by a new generation of American performers, including Madonna and Prince, who attract young, hip, racially mixed audiences.
Productions on the Broadway Stage
Traditional entertainment suggests live theater. The center of this activity in the United States has been a complex of theaters in New York City known collectively as “Broadway”. A street by that name runs the length of Manhattan. Near Times Square (42nd Street) it becomes the “Great White Way”. The tradition of the Broadway theater goes back to the 19th century. Its spirit has always been entrepreneurial rather than academic, focused on producing new hits. Yet, a type of academic, the theater critic, is on hand to observe and analytically report each production on opening night. In March 1915, the Shubert brothers, who controlled the theater market in New York City, tried to influence this process by refusing to allow a drama critic from the New York Times to enter their theaters after he had written an unfavorable review of one of their plays. The Times retaliated by refusing to publish advertisements for Shubert productions. When the Shuberts backed down a year later, it reconfirmed the critics’ independence and enhanced the newspaper’s reputation for editorial integrity.
Broadway theaters are torn between the urge to produce creative works of high quality and the need to stick with productions which appeal to basic human instincts. Plays which are too intelligent often fail from a commercial standpoint. John D. Williams, a successful theater producer in the early part of the 20th century, once said: “Intelligence and good taste are fatal to successful play producing anywhere in America because, handicapped by either of these, you are apt to produce the kind of play you think other college graduates will go to see.” Such productions would fail because “(e)very college graduate ran as fast as he could past the theaters containing these handstitched college graduate plays, put on by a college graduate. And they didn’t stop running until they landed in the front row of the ‘Follies’; failing that, they ran over to see ‘Girls, Girls, and Nothing but Girls’, ‘Oh, You Girls’ or ‘The Skidding of Tottie Coughdrop.’” Still, when an intelligent work such as Eugene O’Neill’s Beyond the Horizon developed new tragic themes or a performer such as John Barrymore brought personal magnetism to a production of Hamlet, critically acclaimed works might also enjoy box-office success. O’Neill’s genius set the stage for other serious playwrights such as Arthur Miller and Tennessee Williams.
An important element in drama, as in other types of entertainment, is the appeal of individual personality. This fact became clear when the Actors’ Equity Association, whose members were individually much better known than their bosses, struck owners of Broadway theaters in August 1919. Threatened with legal action, the striking actors took their case to the public. They did fundraising benefits and free performances in the streets and marched down Broadway waving American flags. The theater owners capitulated after thirty days. Some of the actors, organized in a group called the “Theater Guild”, put on their own performances in a rented theater during the strike. This organization became an incubator for daring and original works both by American dramatists and Europeans such as Chekhov, Ibsen, and George Bernard Shaw. Its idea was to create theater in the form of a democratic cooperative that would produce works for subscription audiences. This concept progressed to that of the “Group Theatre”, a politically inspired company which celebrated America’s working class. Clifford Odets’ Waiting for Lefty, produced in 1935, dramatized a New York taxi drivers’ strike of the previous year. The methods developed here to teach acting shaped the careers of Marlon Brando, Paul Newman, Robert De Niro, and others.
Even so, Broadway theaters were in the business of entertaining customers and making money. The Ziegfeld Follies did this by presenting beautiful women in chorus lines. However, there was always room for a play which became a smashing hit by giving Americans a new look at themselves. Show Boat, produced by Florenz Ziegfeld in 1927, was among the first Broadway musicals to explore race relations in a socially conscious way. Based on a novel by Edna Ferber, it employed the creative talents of Oscar Hammerstein and Jerome Kern to depict life and love on the southern stretches of the Mississippi River. A controversial musical hit of 1940, Pal Joey, starring Gene Kelly, glamorized the life of a low-class womanizer who danced his way into women’s hearts. In 1943, Oscar Hammerstein teamed up with Richard Rodgers to produce an upbeat view of life on the farm. Considered “corny” by some, Oklahoma! dazzled audiences with its western costuming, energetic dance routines, and memorable tunes. It was unapologetically optimistic about heartland America. The same creative team went on to produce such Broadway favorites as Carousel, South Pacific, and The Sound of Music, until Hammerstein died in 1960.
The Movies
During the 20th century, a new element entered American entertainment in the form of newly invented technologies to convey sensuous images. Edison’s pair of inventions, the phonograph and motion picture, captured moving sights and sounds in a medium which allowed later retrieval by electrical machines. The earliest film production took place in the East perhaps because Edison’s studio was located there. The jump was made from short features for nickelodeons to longer productions that told a story. An eight-reel Alaskan adventure film, The Spoilers, drew 40,000 customers to New York’s Strand Theater during its first week. D.W. Griffith’s Birth of a Nation offered two and a half hours of entertainment on twelve reels. Released in 1915, it provoked black riots in Boston by glorifying the Ku Klux Klan. On the other hand, cinema historians credit Griffith with being the first director to realize the full potential of filmmaking techniques. He greatly increased the number of shots and shifted the camera from one view to another to follow significant events in the story. It was Griffith who pioneered close-up shots and cross-cutting between scenes of simultaneous action. While creating a new art form, his works attracted a growing audience for the film industry.
In 1913, a New York vaudeville producer, Jesse L. Lasky, set up an independent film company engaging Cecil B. DeMille as its creative director and Samuel Goldfish (later Goldwyn) as sales agent. Its first production was the film version of a western stage play, The Squaw Man, starring Dustin Farnum. DeMille proposed making the film in the West to take advantage of the more realistic scenery. After spending an afternoon at Edison’s studio in the Bronx to observe filmmaking techniques, he and his colleagues headed west toward Flagstaff, Arizona, but, finding the scenery there unsuitable, continued on to Hollywood, where they shot the 90-minute film over a period of eighteen days. The editing work was done at a rented laboratory nearby. Hollywood had certain advantages over eastern locations. Its season was longer for shooting outdoor scenes, its labor costs were lower, and, most importantly for independent producers, it was far from film makers whose works were licensed by the Motion Pictures Patent Company. Members of this “trust” used legal threats and violence to discourage independent film production. Being close to the Mexican border offered independents an escape if a U.S. court attempted to shut them down. In DeMille’s case, an unidentified vandal broke into his laboratory and destroyed the negative of The Squaw Man. Fortunately, DeMille had made a second negative.
Lasky’s production company made more than twenty films during the next two years before merging with Adolph Zukor’s Famous Players. They became Paramount Pictures in 1917. Sam Goldfish and an associate, Edgar Selwyn, left Paramount to form their own studio, Goldwyn Productions, which merged with Louis B. Mayer Productions in 1922 to become MGM. A third Hollywood studio, United Artists, was created in 1919 through the efforts of a studio manager, Benjamin Schulberg, who convinced some of Paramount’s principal actors and directors to form their own company. United Artists was jointly owned by Charlie Chaplin, Douglas Fairbanks, and Mary Pickford - three of the biggest stars of the silent-film era - plus D.W. Griffith and William G. McAdoo, the company’s general counsel. It had become apparent by then that the presence of certain actors or actresses in a film was critical to box-office success. Realizing that, the film stars demanded and received higher salaries. Pickford, for instance, negotiated a contract with Paramount to do ten films for $2,000 per week plus half the profits. Charlie Chaplin signed a contract to receive $670,000 for a year’s work. The idea behind United Artists was that the stars, in performing for their own company, might keep all the profits.
Starting with Thomas Edison himself, several inventors had envisioned adding sound to motion pictures. The film studios did not push this, however, because silent films were so popular. In 1923, Lee DeForest, a pioneer of radio technology, started the Phonofilm Company to produce and market an optical recording device that would provide synchronized sound for motion pictures. Bell Laboratories developed a competing sound-on-disc process called the Vitaphone. In 1926, the Warner Brothers studio produced an experimental film using the Vitaphone technology. Its next venture was a full-length movie with sound. Warner Brothers bought the film rights to Samson Raphaelson’s hit play, The Jazz Singer, starring George Jessel. Jessel would not come to terms, so the studio signed a contract with Al Jolson, the singer on whose life the play was based, to do the film version. Jolson was a veteran vaudeville performer used to wisecracking on stage. What made the film so appealing was Jolson’s ad-libbed lines in the scene with his mother. The spontaneous conversation struck a chord with the audience, and Jolson became a national sensation. The film industry could not turn back from sound.
Motion pictures had a huge impact on popular culture. Their increasingly frequent performances cut into attendance at lodges. The cheaper theaters featured western adventure films and comedies. “Society” films were more apt to be shown in the high-class theaters. Young women went to the movies to learn how to handle problems of dating in a modern society. Al Jolson’s next film after The Jazz Singer was The Singing Fool, in which he sang “Sonny Boy” to his character’s recently deceased son. Horror films also became a popular genre. The premiere of Dracula, starring Bela Lugosi, on Friday, February 13, 1931, introduced a theme which appealed to Depression-era audiences. Dracula, the monster, had some human qualities that drew a sympathetic reaction from audiences. Theater operators promoted the film by telling customers to stay away and placing nurses in the theaters to revive spectators who had fainted. This winning formula led to other horror films such as Frankenstein. Boris Karloff’s frightening appearance was enhanced by cadaver-like makeup applied to his face.
The decade of the 1930s is considered a “golden age” of film production. Talented writers, actors, and comedians from the New York stage trekked to Hollywood in search of fortune and fame. To lure financially pinched customers into the theater, MGM’s production head, Irving Thalberg, conceived the idea of putting several of his studio’s top stars into a single film in order to create an extravaganza with irresistible box-office appeal. The result was Grand Hotel, released in April 1932. The glamorous Swedish actress Greta Garbo was yoked with Joan Crawford, John and Lionel Barrymore, and others from MGM’s stable of stars in this blockbuster film. Shrewdly promoted, it earned millions of dollars and set the pattern for future all-star films. Hollywood’s best year was 1939. Its seven major studios produced a total of 341 films that year. Many were grade-B westerns, but the offering also included Dark Victory starring Bette Davis and Humphrey Bogart, John Ford’s Stagecoach starring John Wayne, Goodbye, Mr. Chips, and many other notable films. This was also the year of two all-time Hollywood favorites, Gone with the Wind and The Wizard of Oz. Gone with the Wind, a cathartic experience for the American South, holds the all-time record for box-office receipts as adjusted for inflation. The Wizard of Oz made Judy Garland a cultural icon.
Offering the cheapest form of high-quality entertainment ever devised, the film industry was riding high. However, its success attracted criticism of several kinds. First, the industry came under attack from religious groups for the “immorality” of its productions and its performers’ “decadent” lifestyles. The impetus for this criticism may have been comedian “Fatty” Arbuckle’s arraignment for manslaughter in 1921 after an actress fell fatally ill during a wild party in Arbuckle’s San Francisco hotel suite. To forestall Congressional action, the industry set up a committee headed by a former U.S. Postmaster General, Will Hays, to self-police its productions. The Catholic church formed a “Legion of Decency” which boycotted morally offensive films. Such measures ruined the career of Mae West but came too late to stop Cecil B. DeMille from producing The Sign of the Cross, a 1932 spectacle about Nero’s Rome, which showed Claudette Colbert’s bare breasts and a lesbian love dance. The film slyly included a moralistic message condemning such behavior. Another attack came from the U.S. Justice Department. In July 1938, its Anti-Trust Division filed suit against eight Hollywood studios and numerous executives charging restraint of trade. This litigation eventually forced the studios to divest their theater holdings and change their booking arrangements.
A greater challenge to the film industry was competition from television. Weekly attendance at movie theaters dropped from 80 million to 46 million in 1952 as the new technology was being introduced. Television had the unbeatable ability to deliver free entertainment into a customer’s living room, but two disadvantages: its tiny screen and lack of color. The film industry counterattacked with technological innovations to demonstrate its visual superiority. The first was “Cinerama”, a technique of projecting three synchronized color films side by side onto a huge curved screen to create panoramic scenery. Audiences were taken on a realistic roller-coaster ride while seated in theaters. A year later, in 1953, the CinemaScope technique was unveiled, using an anamorphic lens on a single camera to produce crisp wide-screen images. Thousands of theaters were converted to this type of projection. A third innovation, 3-D films, enjoyed brief popularity. Viewers wearing paper glasses experienced the illusion of activity in three dimensions. The process of improving film sight and sound continued with systems such as IMAX which were introduced during the 1970s.
Hollywood continued to prosper in the television age, thanks to video rentals, foreign distribution, product licensing, film libraries, and made-for-television features. However, the film industry was slow to enter the business of producing shows for television. That left an opening for independent producers such as Revue Productions, which was a subsidiary of Music Corporation of America (MCA), a talent agency. Agents representing the Hollywood stars originally were forbidden to enter programming because of a prohibition imposed by the Screen Actors Guild. Lew Wasserman, MCA’s president, negotiated a blanket waiver from that rule with Ronald Reagan, the Guild’s president. MCA went on to become a powerhouse in the television industry, producing roughly one third of NBC’s shows in the late 1950s. This one-time talent agency purchased Paramount Pictures’ pre-1948 films in 1958 and Universal Pictures’ Hollywood studio and adjoining lots a year later. When in 1962 it attempted to acquire Decca Records, Universal Pictures’ parent company, the U.S. Department of Justice filed for a restraining order. MCA had to agree to abandon its business as a talent agency to acquire Decca and Universal Pictures. The wisdom of accepting those terms was confirmed in 1990 when Wasserman negotiated MCA’s sale to Matsushita for $6 billion.
So Hollywood has moved from the old studio system of producing films to a new system based on packaging creative talent. Success at the box office starts with the stars, and talent agencies control that resource. Successful filmmaking also requires finding the right script, the right director, cameramen, and music specialists. The person who can put all these elements together through contacts, contracts, and negotiating skills becomes the real power in the film industry. An additional element in contemporary productions is the increasing reliance upon computer-generated special effects. George Lucas’ 1977 Star Wars started a trend toward computer-assisted filmmaking. Steven Spielberg has become Hollywood’s most successful director with such hits as E.T. and Jurassic Park, which used computer technology. There was a renaissance in animated cartoons during the 1990s as the Disney Studio produced The Little Mermaid, Aladdin, The Lion King, and other works appealing both to children and their parents. The creative talent which once produced a successful Broadway musical has lately gone into this kind of production.
Radio Broadcasting
Ham operators dominated radio broadcasting during the first twenty years that the technology existed. The first commercial station, KDKA, began regular broadcasts in Pittsburgh in November 1920. Initially, profits were made from selling radio receivers. As sets were sold, additional stations were licensed for commercial broadcasting. Radio Corporation of America, headed by David Sarnoff, established the first radio network, NBC, in 1926. A Philadelphia cigar manufacturer, Sam Paley, purchased United Independent Broadcasters in 1928 and gave it to his son. William Paley renamed this fledgling network Columbia Broadcasting System. The Radio Act of 1927 regulated federal licensing of stations. Commercial operators were given exclusive use of certain frequencies for broadcasting their programs. The Federal Communications Act of 1934 established an independent agency to oversee all telecommunications. The authors of Middletown describe typical radio programming in the 1920s as consisting of “a Philharmonic concert, a sermon by Dr. Fosdick, or President Coolidge bidding his father farewell on election eve.” Later, vaudeville-style comedy, play-by-play sports announcing, and dramatic presentations became staples of radio broadcasting in the United States.
In Great Britain, the Wireless Telegraphy Act of 1904 required all wireless transmitters and receivers to be licensed by the Post Office. This agency asked the radio manufacturers to organize themselves into a cartel to avoid the chaos thought to have developed in the United States from unrestricted broadcasting. The British Broadcasting Company, later reorganized as the British Broadcasting Corporation (BBC), was licensed to broadcast in 1922. Broadcasting was defined as a public utility. Under the BBC’s managing director, John Reith, radio was to function as “a servant of culture”, which would help to create a more unified and egalitarian society. A clause in the BBC’s license forbade it “to deal with controversial matters in its programming service.” Although that provision was softened in 1928, the BBC ran a tightly controlled operation which became regarded almost as an extension of the British government. Winston Churchill complained that politicians such as himself who were unacceptable to the party whips were denied access to radio. The BBC’s policy was “to eliminate from (news) bulletins all crimes and tragedies that have not a national or international importance.” Another policy forbade jokes about politicians, advertisements, U.S. prohibition, medical matters, and Scotsmen or Welshmen (but not Irishmen).
By the time of the 1938 Munich crisis, the BBC began to be compared unfavorably with competing models of broadcasting. The relatively wide-open news reporting in the United States gave the American public a more accurate picture of events in Europe than the government-managed radio news in Britain. In the mid-1930s, competition from two offshore commercial stations, Radio Normandie and Radio Luxembourg, forced the BBC to lighten up. For years, it had avoided “infusion of the human element” in news announcing to preserve the focus upon policy questions. That started to change in 1938. The BBC, which had not even had a news department until 1934, began to do recorded interviews and “eyewitness reports” from specially equipped cars. More entertainment features were added including the highly popular sports broadcasts. Fearing job losses to radio, the Variety Artists Federation had advised its members in 1923 not to cooperate with the BBC. Used to feedback from live audiences, comedians at first found it hard to work in radio studios. The BBC struggled to find a type of music that would appeal to diverse audiences. Light music and dance music were early staples of its musical programming. Later, it included more gramophone recordings and more vaudeville or variety features.
American radio was quicker to recycle talent from the vaudeville circuit into the new medium. It gained stature from quality reporting of the European crises by foreign correspondents such as Edward R. Murrow and William Shirer. More radio sets were sold during the three-week period when Neville Chamberlain met Hitler in Munich than in any other comparable period. The Nation said then that radio had become America’s dominant system of news communication. In October 1938, its power was confirmed when Orson Welles and his Mercury Theatre company broadcast a radio play based on H.G. Wells’ The War of the Worlds. The play consisted of faked news reports about a Martian space ship landing in New Jersey. Simulated news flashes were interspersed with weird noises and then a period of prolonged silence. A mile-long traffic jam was created as panic-stricken New Yorkers tried to flee the city. Radio was an ideal medium for fast-breaking news and interviews, as exemplified by the famed newspaper columnist Walter Winchell, who turned celebrity gossip into a minor industry. It showcased wisecracking comedians such as Jack Benny and Bob Hope, and presented imaginative dramas such as The Shadow or The Lone Ranger which gave it a reputation of being “theater of the mind.”
Television
During radio’s heyday in the 1930s, David Sarnoff and his RCA engineers were engaged in patent battles with Philo Farnsworth, inventor of electronic television, for rights to this technology. Sarnoff unveiled RCA’s system of commercial television at the 1939 New York World’s Fair, only for World War II to interrupt its introduction. The Federal Communications Commission assigned the VHF (very high frequency) bands to commercial television, which supported only twelve channels nationwide. CBS pushed for delay of further development of VHF broadcasting in favor of color-television broadcasting on the much broader UHF (ultra-high frequency) band. The television industry stagnated during this period of technical uncertainty. When the FCC denied CBS’s petition in April 1947, it promptly received sixty new applications for stations, and sales of VHF sets increased. Only 60,000 sets were in use that year, two thirds of them in New York City. About half were owned by affluent individuals, and half by bars serving a predominantly male clientele. The bar audience preferred news and sports programming. Because of scarce resources, there was a tendency to recycle talent and materials from radio to commercial television and recreate vaudeville in the form of comedy-variety shows.
Both NBC and CBS financed their early television operations from profits earned in radio. In addition, RCA made money from manufacturing television sets. Commercial television fought Hollywood’s attempt to create an entertainment alternative in the form of large-screen television placed in movie theaters. It successfully opposed pay-television schemes brought before regulatory agencies. The Hollywood film companies were not allowed to own television stations. The scarcity of VHF licenses and an FCC-imposed moratorium on permits for station construction between 1948 and 1952 put the commercial-television industry in the driver’s seat in negotiating both with advertisers and producers of programming. Advertisers, no longer the sole sponsors of programs, conceded program licensing to the television networks while retaining certain rights to censor programming content with respect to subjects, characters, and language. From program producers, the networks demanded and received ownership and syndication rights for the shows in exchange for giving them a network time slot. The success of I Love Lucy and Dragnet during the 1952 season brought increased attention to filmed productions. That led to syndication of show reruns, especially in foreign markets.
In the mid 1950s, U.S. television audiences moved from an upscale clientele to include more lower- and middle-class viewers. More than half of Americans who purchased television sets in 1950 financed them on credit. “TV is becoming the poor man’s theater”, a journalist observed. While original plays such as Paddy Chayevsky’s Marty were shown on commercial television in the early 1950s, network executives soon realized that continuity of programming increased audience size. The popular western, Gunsmoke, ran from 1955 to 1975. Bonanza aired from 1959 to 1973. Soap operas had been developed for midday radio audiences during the 1930s as a means of selling soap powder to women. Commercial television took over this format. CBS, which had started well behind NBC in the television race, roared back in the 1950s to become the top-rated network. Its chairman, William Paley, had a good sense of audience tastes. He hired star comedians such as Red Skelton, Jack Benny, and Burns and Allen from NBC radio in the late 1940s and switched them over to television in the following decade. As always, the performers’ personalities were the key to attracting large audiences. Viewers grew comfortable seeing the same faces week after week on the small screen.
CBS styled itself “the Tiffany network” because of its high-quality programming, especially in television news. Paley was a close friend and supporter of Edward R. Murrow, a journalist known for his hard-hitting investigative reporting. For instance, Murrow’s exposé of Sen. Joseph McCarthy in a See It Now program aired in March 1954 helped to turn the tide of popular opinion against this anti-communist crusader. Yet, CBS’s prestigious news operation was subsidized by earnings from the more popular sitcoms, quiz shows, sports broadcasts, and action drama. The network competition for audiences brought programming down to the lowest common denominator of public taste and produced what FCC chairman Newton Minow in 1961 called “a vast wasteland”. Academic critics called for an alternative to commercial television. The FCC responded by reserving 242 channels, mostly on the UHF band, for educational broadcasters. The first noncommercial station went on the air in 1953, but public television languished for a decade. In 1960, only 7 percent of U.S. television sets were equipped to receive UHF signals. Then, after the quiz-show scandals on commercial television, Congress provided public subsidies for educational broadcasts and required that new television sets sold in the United States be equipped for UHF reception.
The two main commercial-television networks, NBC and CBS, were joined by ABC, the American Broadcasting Company, which was formed in the 1940s when the U.S. Department of Justice forced NBC to divest its smaller network. And, in 1986, Rupert Murdoch created the Fox Network from a group of independent stations. ABC was an also-ran until the 1970s when sponsorship of the Olympic Games and programming direction by Fred Silverman, CBS’s one-time entertainment strategist, lifted it for a time into first place. In the 1980s, NBC enjoyed a resurgence of popularity under the direction of Grant Tinker, a former Hollywood producer. In 1980, Ted Turner, owner of local station WTBS-TV in Atlanta, created an all-news channel, Cable News Network, which broadcast reports of international events 24 hours a day. After losing money for five years, Turner’s operation became profitable in 1985. It has become a forum by which the world’s political leaders talk to each other and monitor ongoing events.
The BBC began television broadcasts to home audiences in 1936, earlier than in the United States. These were discontinued during the wartime period but resumed in 1946. Queen Elizabeth II’s coronation in 1953 aroused interest in this medium. In 1950, only the United States, Britain, France, and the Soviet Union had regular television broadcasts. The Russians used television to “mold a Marxist-Leninist outlook and promote the political and cultural development of all the Soviet people.” The French, too, tended to have politically flavored programming; many television stations were either owned by politicians or the government. The British put more emphasis on educational and cultural programs. The market-driven Americans produced television shows that appealed to wide audiences. Family-oriented shows such as The Adventures of Ozzie and Harriet and Father Knows Best were popular during the 1950s. ABC’s Disneyland made its debut in 1954, combining Hollywood-produced entertainment with the opening of Disney’s first theme park. Reruns of U.S. television shows became popular in other countries. Gunsmoke, Rin Tin Tin, and The Lone Ranger were among Mexico’s Top Ten television shows. Japanese society in 1958 was described as “TV-obsessed”.
Ted Turner had long wanted to buy a television network. When that effort failed, he made a deal in 1985 to purchase MGM along with its library of 3,301 films plus 1,450 additional films from the RKO and Warner libraries. High debt forced Turner to sell the MGM studio and other assets, leaving him with the film libraries for which he had paid $1.2 billion, seemingly a high price. Turner realized, however, that the film classics which he had purchased, including Gone with the Wind and Casablanca, were a unique commodity. In a maturing industry, they were like ageless stars whose service commanded a high price. Turner colorized many black-and-white films, increasing their value in syndication. He established a cable-television channel, Turner Network Television, to show the films on a regular basis. Cable television was undercutting audiences tuned to the major networks. It was creating a new niche-focused mode of entertainment. Movie reruns were an important segment of cable as were sports, news, and educational programs. MTV, a youth-centered channel presenting rock videos, revolutionized the entertainment industry with its artful editing of bizarre visual images accompanying synthesizer-enhanced music. A freewheeling question-and-answer session with two hundred young people on MTV in June 1992 helped to propel Bill Clinton to the U.S. Presidency.
Television coverage has changed the nature of political campaigning. The classic event was the first Presidential debate between Richard Nixon and John F. Kennedy in September 1960. Many who heard it on radio thought the result was a draw. However, the debate gave Kennedy’s campaign a boost since, to the 75 million Americans who had watched it on television, Kennedy appeared vigorous and well-tanned while Nixon, who had refused facial makeup, seemed sickly. Kennedy had been coached to look at the audience while Nixon looked mostly at Kennedy. The lesson for subsequent candidates was that one’s visual appearance on television matters as much as the words spoken. On the other hand, Richard Nixon’s political resurrection in the 1960s might have been partly due to an appearance which he made on the Tonight Show with Jack Paar. After playing a short piece on the piano, Nixon quipped that this incident would surely doom his political future since, after Harry Truman, “Republicans can’t stand to see another piano player in the White House.” George Bush came off as a more engaging, likable candidate than Michael Dukakis in the 1988 candidate debates. Though Dukakis’ verbal proficiency was high, his body language suggested lack of emotion.
Television news has gravitated towards the cult of the anchorman. The avuncular Walter Cronkite, CBS’s news anchor, was known as “the most trusted man in America.” Political conservatives charged that Cronkite, a liberal, was able to slant the news by the intonations in his voice or by raising his eyebrows in a certain way. News organizations developed a format by which public officials communicated with the public through sound bites selected by television editors. They dodged questions posed by reporters such as Sam Donaldson, known for the ferocity of his attacks. In this tough environment, the most that political candidates could expect from news coverage would be to present an attractive visual image and avoid making gaffes. News editors controlled what part of their taped appearances the public would be allowed to see. Network commentators and pundits put their own spin on the interpretation. The only way that a candidate could guarantee that his message would reach the public in its original state would be to purchase time for a paid commercial. However, television commercials were expensive, and the cost of running media campaigns has forced politicians to devote more time to fundraising and cater increasingly to well-funded interest groups.
Television has become a force in shaping social and political attitudes. Whether it was the Army-McCarthy hearings in the 1950s, the Vietnam war in the 1960s, the Watergate hearings in the 1970s, the clash between Clarence Thomas and Anita Hill in 1991, or the O.J. Simpson trial in 1995, television coverage has created a national morality play which polarizes the public along ideological or demographic lines. Its programming reflects the fact that sponsors want primarily to attract female viewers between the ages of 25 and 54 because they are prime shoppers for the kinds of products advertised in television commercials. Therefore, much of the programming that appears both in prime time and during the day is calculated to appeal to women. Women are shown in strong professional roles, often matching wits with men and winning. A male ghetto on television is found in the weekend sports programs. With respect to racial stereotypes, the days of Stepin Fetchit have given way to shows presenting confident black males, often in military or police roles. On the other hand, the national preoccupation with the Clarence Thomas-Anita Hill conflict or O.J. Simpson’s murder trial may have continued a long-standing tradition among white Americans of viewing blacks as sources of amusement.
The distinction between news and entertainment is increasingly blurred. The news shows want higher ratings, and entertainment-like features deliver them. So local television coverage tends to focus on crime, scandals, accidents, and other emotionally charged events. Tabloid journalism and talk shows such as Jerry Springer’s specialize in revealing personally embarrassing and lurid conflicts. The new “reality-based” television programs such as A Current Affair or Inside Edition are cheaper to produce than sitcoms because some of the work can be done by regular news staff. Partly, however, such shows reflect the current disposition to trash celebrities. That may be a reaction to the contrived nature of celebrity itself. The entertainment world is populated by a galaxy of young, physically attractive men and women who behave in a sure-footed way. Yet the viewing public is aware that this world is essentially illusory. There may be a discrepancy between the public image and the actual lives of those performers beheld at a distance in theaters or on the silver screen. People grow hungry for personal information, especially when their idols betray human frailties. NBC’s Tonight Show and its clones have offered a venue for interviewing celebrities.
Sports Broadcasts
An important type of entertainment is the presentation of artificial but unscripted events whose outcome spectators do not know while the events are taking place. Athletic contests illustrate this type of entertainment. The first televised game may have been the BBC’s broadcast of a tennis match at Wimbledon on June 21, 1937. Only 2,000 well-heeled Londoners owned television sets at that time. The pictures were blurred. The BBC persuaded the Football Association to permit a telecast of the soccer match between England and Scotland held on April 9, 1938. There was a concern that broadcasting this game would conflict with attendance at some of the lesser sporting events in the London area. Sports broadcasting in those days also faced the challenge of unreliable equipment. The technicians hooked up the camera with thick cable which sometimes became disconnected, causing a sudden blackout. NBC kept two cameras on the fifty-yard line at football games to guard against that possibility. Inadequate lighting was another obstacle. Even so, sports programming was a staple of early television because it was cheap. The networks, which lacked the resources to fill all their time slots with credible programs, paid little for this type of ready-made entertainment. It was common for saloon keepers to lure customers by offering televised wrestling or boxing matches.
Televised sports make successful entertainment because they combine a visually exciting spectacle with human drama. Each contest creates immediate winners and losers. The athletes’ physical strain and fatigue elicit a strong emotional response. After ABC broadcast the 1972 Olympics, it jumped from third to first place in the ratings. ABC executives realized that exclusive sports coverage was the key to network supremacy. While the 1972 Olympics in Munich commanded unusual attention because of the terrorist kidnappings, the television editors shrewdly focused upon individual athletes such as Olga Korbut to build human interest. They highlighted the ice skating and gymnastics routines. ABC’s sports director, Roone Arledge, developed many of the techniques used in sports broadcasting today, such as slow-motion shots, instant replays, computerized graphics, and the practice of panning the stadium crowd to establish a sense of kinship with television viewers. Monday Night Football with Don Meredith and Howard Cosell affirmed the importance of personality, the commentators’ as well as the athletes’, in sports broadcasting. Cosell himself said: “There is no damn way you can go up against Liz Taylor and Doris Day in prime-time TV and present sports as just sports.”
The way to involve audiences personally was to convert athletic events into stories about the athletes. For instance, after the South African runner Zola Budd accidentally tripped Mary Decker-Slaney during a race at the 1984 Olympics, sports broadcasters billed the two runners’ subsequent appearance together as a “grudge match”. NBC’s coverage of the 1998 Winter Olympics in Nagano, Japan, seemed to be more a collection of documentaries about the athletes’ lives than reports of the competitions themselves. A favorite trick of sports broadcasters is the so-called “honey shot” - letting the camera linger for a moment upon an especially attractive female spectator in the stands to discourage the predominantly male viewers from switching channels. Sports figures have become frequent guests on talk shows, quiz shows, and other television programs. Their celebrity status allows them to command higher salaries and secure lucrative product endorsements. That has, in turn, put the spotlight on the money involved in professional sports, caused ticket prices to increase, and created pressure for team owners to demand public subsidies for building new and larger stadiums. Television money has changed the nature of sports from recreational pastime to big-time entertainment.
Gambling
A characteristic of television entertainment is that it concentrates mass attention upon a particular event. The trick is to make money from that situation. One way is to insert paid commercial messages next to the programming. Another is to tie entertainment directly to the process of spending money. If many people in a large audience each contribute small sums of money, their pooled contributions can finance a large payout to the winner of a gambling event. Because the outcome is unscripted, each person who places a bet has a small but real chance of winning the big prize. There is something within an otherwise rational individual which convinces him or her that betting money on games of chance with certifiably unfavorable odds is fun. Gambling experts refer to “the heat” - an irresistible feeling that one must continue placing bets to recoup past losses or continue a winning streak. In any event, the proprietors of gambling activities set the payouts at a level which assures a healthy profit margin for themselves. While individual fortunes are being won and lost in the games, the house always wins.
Gambling has long been tied to sporting events. More Americans attend horse races each year than attend professional baseball, basketball, and football games combined. An even more popular form of gambling is the lottery. A lottery run by the Continental Congress helped to fund the American revolution. Another supplied funds to build the city of Washington. Lotteries were abolished in Great Britain in 1826. A crackdown on them occurred in the United States during the 1830s. Congress closed interstate commerce to lottery materials in 1895. However, lotteries were revived in the 20th century to fund charitable projects. The Irish Sweepstakes, organized in 1930, used gambling proceeds to operate hospitals. The states of New Hampshire and New York established lotteries during the 1960s to support the public schools. Today all but two U.S. states, Utah and Hawaii, have legalized gambling in one form or another. The biggest gambling state is, of course, Nevada, where such activities were legalized in 1931. Much of the betting here takes place in casinos where card tables, slot machines, and wheels of fortune fill the rooms and line the halls.
Gambling was legalized in Nevada in the same year that construction of the Hoover Dam began. Construction workers who came to nearby Las Vegas to spend their paychecks were exposed to this activity. Nellis Air Force Base was not far away. The city’s first full-fledged resort hotel, El Rancho Vegas, opened on U.S. Highway 91 in April 1941. Three other resorts - the Last Frontier, Flamingo, and Thunderbird - soon appeared. Las Vegas acquired a reputation as a frontier town where gambling was legal. The Hollywood connection began when Clara Bow (the “It Girl”) and her husband, Rex Bell, bought a large ranch just outside town. Well-known film personalities were frequent visitors. In 1946, a gangster with Hollywood ties named “Bugsy” Siegel oversaw construction of the “Fabulous Flamingo” Hotel. Suspected of embezzling money from this project, he was killed a year later. However, Siegel’s vision of a big-time center of gambling and entertainment survived. Jimmy Durante became the Flamingo’s first headline performer. Lena Horne, Sophie Tucker, Dean Martin and Jerry Lewis, and Ella Fitzgerald were among the entertainers who worked Las Vegas during that period. The downtown gaming clubs along Fremont Street came to be known as “Glitter Gulch”.
Gambling was initially the main attraction of Las Vegas resort hotels, and entertainment merely an adjunct. However, competition among the hotels forced their managers to upgrade the live entertainment. Celebrity performers were the lure that enticed gambling customers from the other hotels. Casino executives evaluated how each headline performer affected the casino “drop” - its gambling profits - and paid accordingly for the next engagement. This sometimes resulted in huge paychecks for Las Vegas performers while nightclubs in places without gambling could not afford to book their acts. Patrons of Las Vegas casinos could expect to find a concentration of big-name entertainers such as Frank Sinatra or Sammy Davis Jr. A memorable event took place in November 1956 when Liberace and Elvis Presley performed an impromptu role-reversing duet. Presley put on Liberace’s glittering jacket, assuming for the first time an image which he and countless Elvis impersonators have made famous. Las Vegas also became the site of major sporting events such as the Tournament of Champions golf event and several heavyweight-championship boxing matches.
Gambling is today America’s most popular form of adult entertainment. The number of Americans who visit casinos has doubled in the past five years. More than 31 million people a year visit Las Vegas as tourists. America’s fastest-growing major city, it has eleven of the world’s twelve largest hotels. The amount of money bet annually in the United States exceeds the combined amount which Americans spend for automobiles and housing. Despite misgivings, the gambling juggernaut continues in part because many of those who are charged with guarding community well-being are themselves in on the take. State lotteries are widely used as substitutes for taxation; they seem to lawmakers like sources of “easy money”. Indian tribes, whose casinos represent the industry’s fastest growing segment, sometimes refer to gambling as “the return of the buffalo”. Today’s owners of Las Vegas resort hotels are no longer mobsters but large corporations and pension funds. In two years, $6 billion of new construction has taken place in Las Vegas to build such monumental attractions as “New York, New York”. In ten years, some say, there will be no more controversy about gambling. “Gaming”, as industry spokesmen prefer to call it, will merge with other activities to provide “one-stop shopping” for persons seeking entertainment.
A cluster of entertainment activities, including gambling and pornographic shows and, perhaps, prostitution and drugs, may emerge in Las Vegas and similar places under the heading of “adult entertainment”. If not controlled by organized crime, these legalized activities will be managed by hard-nosed business types, oblivious to their social effect. Meanwhile, another cluster of activities, centering in a place like Disneyland, might provide “family entertainment.” This would be entertainment suitable for children or, as they say, “for children of all ages.” When Disneyland first opened in 1955, an article in Reader’s Digest suggested that Disney had pulled off the entertainment industry’s first “triple play” in adding a theme park to a top-rated television show and a hugely successful movie business. (Since then, this company has acquired a professional sports team and a major television network.) Both Las Vegas and Disneyland are “virtual cities”, created by the culture of mass entertainment. Where once entertainment events required special places such as auditoriums or fairs to assemble the crowds, the process has come full circle in that mythical places like Disneyland have been created out of images broadcast everywhere.
Narrowcasting
The television broadcasting industry, which has dominated the U.S. entertainment culture, shows signs of losing its cultural grip. The top three television networks in the United States - CBS, ABC, and NBC - held 85 percent of the prime-time audience when the 1980s began. By the end of the decade, only 67 percent of prime-time audiences were watching those programs. The percentage then dropped to 54 percent in the summer of 1992 and, by the summer of 1997, to 40 percent. In 1997, almost as many Americans were watching cable-television shows as those on the big networks. Competition from cable television, the new Fox network, and personal computers has combined to produce serious erosion of audiences for network television, Civilization IV’s form of empire. The average U.S. household owns two to three television sets and receives more than forty different channels. Eighty percent own VCRs. Ninety-four percent have remote controls. Switching channels is as easy as pushing a button while one reclines on a sofa.
Cable television offers dozens of different channels catering to specialized interests. Web sites number in the hundreds of thousands. With so many people watching these offerings on television or computer screens, the trend has been to move from radio and television broadcasting toward what some have called “narrowcasting”. Entertainment is being pitched more narrowly to segments of the viewing public which share certain interests. Advertisers benefit from this trend because they can reach audiences known to be interested in their type of product. Their message can go out to groups of likely buyers without having to pay the higher broadcast rates. This trend toward narrowcasting may, in part, represent public reaction to the dumbed-down culture of commercial television. People are bored with the one-size-fits-all programming. They want variety tailored to their particular interests. Partly, however, it may reflect the changing nature of communications technology. Computers have become more important. The computer adds an interactive capability to communications. Allowing multilateral contact among individuals, it is the ultimate specialized communicator.
The trend is to give individuals exactly the kind of entertainment that they want. What viewers choose, in turn, drives marketing strategies to develop programming content and sell products to the various audiences. The computer is an important tool in this process because it allows entertainment executives and advertisers to track individual preferences. Data from viewer surveys and product sales are collected in computer files and then analyzed. This information becomes a basis for creating demographic profiles to be used in marketing campaigns. Individuals fitting a profile are included in the target audience for a particular sales message. Because the message is tailored to known attitudes of the group, the sender can be reasonably sure that it will strike a sympathetic chord. This is where commercial advertising is headed. We are no longer one people, able to be reached through broadcast messages, but a population segmented by demographic identity and personal interest. The sellers of products communicate separately with each known type. In this brave new society, we are defined individually by the mailing lists that include our names.
Computer-Generated Entertainment
The computer’s ability to store aural and visual images and change them in desired ways has taken entertainment to the next level. Digital samplers can create music by modifying sounds stored in the form of numerical codes on a floppy disk. By altering the codes for previously recorded sounds, this device can raise or lower the pitch, speed up or slow down rhythms, insert new musical segments, or overlay the sampled sounds to give an orchestral effect. The music is clearer than what synthesizers used to produce. Likewise, computer-generated graphics have revolutionized filmmaking. For example, technicians working on a keyboard with a five-second clip from Interview with the Vampire which shows Tom Cruise bending over to bite his next victim can alter details such as the color of blood smeared on Cruise’s face and the size of his fangs to increase the sense of horror. Real-life actors and actresses have become models for cartoon characters, giving them a more realistic appearance. Ever since George Lucas’ Star Wars, Hollywood filmmakers have successfully used computerized images to create scenes that could never be shot by camera. Action pictures relying upon such special effects have become some of today’s biggest box-office hits.
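The pitch- and tempo-altering tricks described above come down to simple manipulations of the stored numerical codes. A minimal sketch (purely illustrative - the function name and data are invented for this example, not any real sampler's API): keeping every second stored sample halves the playback length, which raises the pitch by an octave when played at the original rate.

```python
# Crude digital-sampler operation: audio is stored as a sequence of
# numerical codes; dropping samples changes speed and pitch together.

def resample(samples, step):
    """Keep every `step`-th sample - a naive rate change."""
    return samples[::step]

tone = [0, 3, 6, 9, 12, 15, 18, 21]   # stand-in for digitized audio codes
faster = resample(tone, 2)
print(faster)  # [0, 6, 12, 18] - half as many samples, so twice the speed
```

Real samplers use far more sophisticated interpolation so that pitch and tempo can be changed independently, but the principle - arithmetic on stored codes - is the same.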
Taking entertainment a step further, computer technicians have developed a technology called “virtual reality” to produce images that change in response to the viewer’s physical movements. In the 1980s, a firm in California called VPL Research invented a set of goggles and gloves linked to a computer. If the viewer turned his head sideways, sensors inside the goggles would relay that information to the computer which would then create images in the goggles reflecting the changed scenery. Finger movements inside the gloves to simulate firing a gun might send imaginary bullets to a target on the screen. VPL Research’s DataSuit, resembling an aviator’s jump suit, was lined with more than fifty different sensors attached by fiber-optic wiring to a computer, which allowed visual images to change following some action on the actor/viewer’s part. Unlike previous technologies which merely recorded sense impressions, computers can alter them to achieve this life-like effect.
During the 1990s, interactive entertainment became a feature of theme parks and shopping malls across the United States. This industry already earns more revenue from youngsters dropping quarters into machines than first-run movies do. The realistic action games are an outgrowth of military flight-simulation technology. The BATTLETECH center, which opened in Chicago in August 1990, was an early application to popular entertainment. Visitors to the center played a combat game on virtual-reality terrains by manipulating buttons and switches on control panels. English entrepreneurs have created a similar role-playing game based on “Dungeons & Dragons”, featuring mythical characters in a medieval setting. “Merlin’s Magical Motion Machine” at the Excalibur Hotel in Las Vegas takes audiences on imaginary high-speed train and roller-coaster rides. Its seats are equipped with lock-down safety bars. The “Star Tours” ride is Disneyland’s most popular attraction. Blockbuster Entertainment Corp., the video-rental giant, has branched out into the area of creating “high-tech adult playgrounds” which take customers on virtual-reality tours of city streets. A firm called LunaCorp. has even proposed placing a dune buggy on the Moon’s surface which customers at a theme park on earth might operate by remote control.
Predicting the Future
Some Patterns in World History and How They Can Be Used to Predict the Future
Civilizations belong to a living culture and have characteristics of living organisms. They rise and fall in the cycles of life. World historians have identified certain societies that have gone through the complete cycle. Rome's civilization, once powerful, has now become extinct. So have civilizations of the Babylonian, Mayan, Sinic, Indic, Syriac, and other societies.
Civilization, in a broad sense, transcends the life cycles of individual societies, passing its culture along to peoples in many parts of the earth. Civilizations, too, have come one after another, constituting successive historical epochs. Four civilizations have already emerged and developed to a mature state. Another has recently appeared on the cultural horizon; it remains in embryonic form. That makes five civilizations altogether, which are:
Civilization I: This is the earliest form of civilized society, beginning in the 4th millennium B.C. with the rise of Mesopotamian and Egyptian city-states and culminating in the four great empires - Roman, Parthian, Kushan, and Han Chinese - of the 2nd and early 3rd centuries A.D. Its age was characterized by conflict between nomadic and agricultural societies and by wars and political empire-building. The technology of writing (originally, in ideographic form) supported its culture.
Civilization II: This is what civilized societies became after the philosophical and spiritual awakening of the 6th and 5th centuries B.C. which was, in turn, related to the invention of alphabetic writing. Although this civilization was begun in a period dominated by political empires, it came into its own after the Huns and other nomads destroyed these empires between the 3rd and 6th centuries A.D. The dominant institution in society became religion. The three world religions - Buddhism, Christianity, and Islam - and other religious or philosophical systems such as Hinduism, Judaism, and Confucianism dominated human culture in the first 1,500 years of the Christian era.
Civilization III: This is the civilization of European secular culture which began with the Italian Renaissance of the 14th and 15th centuries A.D. and continued through the first two decades of the 20th century A.D. Humanist literature and art as well as empirical science mounted a challenge to philosophically based religions. This civilization was predominantly commercial although secular education also played an important role. Society became organized in European-style nation states. The technology of printing supported its culture.
Civilization IV: This is the culture of news and entertainment that we have come to know in the late 20th century. Advertising drives commerce, and the media in which advertising takes place (especially television) become powerful institutions within society. Various electronic technologies such as the telephone, sound recordings, cinema, radio, and television support this culture which emphasizes the sensuous aspect of human personality.
Civilization V: All we know about this culture is that it is computer-based. Computers, which support two-way communication between man and machine, are quite unlike the technologies of mass communications. However, computer-based systems and applications are developing so rapidly that it is hard to predict what will come next.
Four of these five civilizations have seen the light of day. The fifth - the computer-based civilization - is like an infant opening its eyes for the first time. Because world history now contains a fairly complete record of the first four civilizations, it is possible, by examining them, to identify certain historical patterns that apply to civilizations as they develop from one stage of life to another. Once the general pattern is established, it becomes possible to apply it to the civilization that is just now beginning to develop, giving us a glimpse into the future.
Some of the patterns that appear from an examination of the first four civilizations include the following:
(1) When a new technology with radically different and improved capabilities of communication is first introduced into society, it will profoundly change the culture and, indeed, mark the beginning of a new civilization. Qualities inherent in the technology help to shape this new culture.
(2) New civilizations produce new institutions of power as functions once handled informally become organized, detach as separate power centers, and assert political and cultural dominance.
(3) Each civilization develops its own dominant beliefs and values, its own models of attractive personality, and its own "religion" in a broad sense.
(4) Civilizations follow a life cycle in which their period of "youth" is marked by vigorous growth and cultural creativity, their period of "adulthood" is marked by the formation of empires, and their period of "decline" is marked by institutional coercion and violence involving those empires.
(5) Themes or values that prevailed at the beginning of an historical epoch often give way to their opposite as the epoch comes to an end.
(6) The arrival of a new civilization also affects institutions that were dominant two epochs earlier. Such institutions undergo a democratizing process.
Now, to apply these patterns to the fifth civilization, we can ask these questions:
(1) Assuming that computer technology heralds a new civilization, what qualities in this technology will shape the new civilization?
(2) What new institution(s) may become dominant in the society?
(3) What will be this society's type of "religion", including its dominant beliefs and models of attractive personality?
(6) Since commercial and educational institutions became organized during the third epoch as separate centers of power, one anticipates that they will be the focus of change two epochs later - as the fifth epoch of history begins. What changes might be anticipated in these two areas as computer technology is applied? How might "democratization" take place in education and in commerce?
Questions related to the fourth and fifth patterns are difficult to frame since only in retrospect do we know the prevalent forms of a civilization.
(1) When a new technology with radically different and improved capabilities of communication is first introduced into society, it will profoundly change the culture and, indeed, mark the beginning of a new civilization. Qualities inherent in the technology help to shape this new culture.
Civilization I: Writing was invented in Mesopotamia around 3100 B.C.; in India, around 2500 B.C.; in China and Crete, around 2000 B.C. The dates of this invention roughly coincide with the appearance of city-states and warfare between these states, leading later to kingdoms and empires. Ideographic writing was too difficult for the general population to learn, but it was mastered by professional scribes. Written records were necessary to support government and religious bureaucracies.
Civilization II: Alphabetic writing was first developed in the mid 2nd millennium B.C. in Palestine and Syria. However, it did not really take hold until around 1000 to 600 B.C. This type of writing let a broader and more active segment of the population acquire literacy. Inquiring minds began to explore the concept of words. Religious scriptures began to appear. These events supported the practice of philosophy and the development of the world religions.
Civilization III: Printing came to western Europe in the mid 15th century A.D. This cultural technology helped the dissemination of knowledge and supported a system of universal education. Printed newspapers contained commercial advertising which became the principal way that businesses communicated with customers. Printed books were a primary resource in schools.
Civilization IV: Various devices of electronic communication were invented in the late 19th and early 20th centuries. These media created an ongoing spectacle of public events in real time. They attracted commercial advertising, replacing newspapers as businesses’ primary tool for selling products. Individual performers became widely known in society.
Civilization V: Computer networks became popular in the 1990s. They permit two-way communication between individuals on the Internet. Customers for products can communicate directly with producers. Search engines hone communications to the user's particular needs.
(2) New civilizations produce new institutions of power as functions once handled informally become organized, detach as separate power centers, and assert political and cultural dominance.
Civilization I: Government detaches from the temple. Royal courts are established. Warring kingdoms create political empires. Culmination in the Roman, Parthian, Kushan, and Han Chinese empires of 2nd century A.D.
Civilization II: Philosophers offer to reform governments. Religious prophets challenge imperial power. The institution of world religion arises alongside secular power to govern society in a dualistic arrangement.
Civilization III: The Crusades spur Mediterranean commerce. Moneylenders finance wars between kings, Popes, and Holy Roman emperors. The wealthy merchants of north Italian cities become patrons of the arts and educate their children in humanist studies. From these beginnings commercial empires grow. Universities sprout across Europe and America.
Civilization IV: The entertainment industry appeared as one, among many, in commercial society. When radio and television became the dominant entertainment media, entertainment became a key to selling commercial products.
Civilization V: The Internet was a complex of connected computers pioneered by the U.S. Defense Department and several universities. Suddenly new commercial opportunities were found.
(3) Each civilization develops its own dominant beliefs and values, its own models of attractive personality, and its own "religion" in a broad sense.
Civilization I: This was the epoch of civic religion, when cities had protector gods and emperors were deified. The dominant personality was the conquering king - Pharaoh, Julius Caesar, or Alexander the Great.
Civilization II: The "religion" of this epoch was world religion in the familiar sense. The epoch found its attractive models of personality in philosophers and religious prophets. The great kings and emperors were persons versed in philosophy, such as Marcus Aurelius, Asoka, or Alexander the Great.
Civilization III: While the world religions continued in force, a new secular culture of literature, art, and music became a carrier of spiritual value. The dominant personality was the creative artist or writer. Political leaders who wrote good prose took on the attributes of greatness. Frederick the Great mastered French prose writing. Thomas Jefferson and Abraham Lincoln were known for their prose eloquence. Winston Churchill was another accomplished writer.
Civilization IV: The world of films, music recording, radio, and television constitutes a cultural "heaven" in which performing artists are bright stars. Entertainers-turned-politicians, such as Ronald Reagan and Jesse Ventura, are in sync with this age.
Civilization V: We do not know what will be the "religion" of this epoch, or whether a geek-like personality will be politically successful.
(4) Civilizations follow a life cycle in which their period of "youth" is marked by vigorous growth and cultural creativity, their period of "adulthood" is marked by the formation of empires, and their period of "decline" is marked by institutional coercion and violence involving those empires.
Civilization I: The youth of this civilization occurred in a time so distant that little is known of it. The adult phase is marked by the formation of large political empires: Egyptian, Assyrian, Persian, Greek, Roman, Parthian, Sasanian, Mauryan, Gupta, Han, and successive Chinese empires. Around the time of Christ, the Mideastern world was tired of the fighting. People yearned for peace.
Civilization II: The youth is marked by the lives of great philosophers and religious prophets whose lives are chronicled in classics of literature such as Plato's Dialogues or in religious scriptures. These classics tend to have been written at the juncture of spoken and written culture - when writing was still a novelty. The Greeks had only recently become literate in Socrates' time. Jesus' life was told orally before the Gospels were written; the same was true of the Buddha. Harb popularized writing in Mecca. The adult phase is marked by the institution of world religion - its ecclesiastical structure, its monasteries, its relation to secular power. In the phase of decline, the different religions fight each other: Christians fight Moslems in the Crusades and in Spain, Moslems fight Hindus in India. This is also the age of witch-burning and persecution of "heretics".
Civilization III: The youth is the period of the Italian Renaissance, the age of world discovery, the conquest of the Aztec and Inca empires, and other heroic times. This culture, recently exposed to printing, produced William Shakespeare, Cervantes, and others. Weary of theological disputes, European intellectuals turned to the study of nature. The adult phase saw the development of large commercial systems, of universities, and the nation state. In its phase of decline, Europe self-destructed in imperial rivalries and two world wars. Ideologies had turned hateful and destructive.
Civilization IV: Humanity turned to lighthearted pursuits following the two world wars. Popular entertainment became a larger part of public life. Empire in this epoch consisted of such things as the Hollywood studios and radio and television networks. Now there are signs of public disenchantment with big-time entertainment: the sex and violence of Hollywood movies and television, the destructive vision of rap music, a generation of young people who do not know how to read.
Civilization V: The Internet is an exciting place these days. Its proprietors have become fabulously wealthy as stock prices climb. Teenage hackers disrupt corporate websites. Internet use is growing by leaps and bounds. Beyond that, we do not know.
(5) Themes or values that prevailed at the beginning of an historical epoch often give way to their opposite as the epoch comes to an end.
Civilization I:
(a) This epoch is about the enlargement and consolidation of political power through military force. Its theme is one of the strong subduing the weak, of victory through bloodshed, of earthly grandeur and power. At the end of the epoch, contrary ideas came to the fore. Philosophers proclaimed that goodness is superior to wealth or power. Jesus taught that the last shall be first, and the first last, and that the meek will inherit the earth. Succeeding the military conqueror was the "prince of peace". Originally applied to Solomon, who followed the warlike David, this title was later applied to Jesus as Messiah. Also, two Roman emperors were princes of peace: Augustus pulled back to a more defensible border at the Rhine river after the disastrous defeat in the Teutoburg forest in 9 A.D. Likewise, Hadrian set the empire's boundaries at the Euphrates river after Trajan's unsuccessful attempts to reconquer land from the Parthians.
Civilization II:
(a) The early Christians were pacifists. Jesus offered no resistance to his captors. Christians at first refused to serve in Rome's imperial armies. But as Christianity became accepted within the Roman empire, Christians joined the army. Frankish kings gave the church territories in Italy which needed to be defended by force. Pope Urban II launched a new era in religious warfare when, in 1095 A.D., he gave his blessing to a Christian crusade to recapture the Holy Land from the Moslems. Religious warfare between Moslems and Hindus and even between Christians and Buddhists (in Japan) characterized the closing phase of this epoch. The ideal of peace had given way to that of war.
(b) Influenced by Plato's philosophy, early Christianity valued "things unseen" above those which could be seen. The body was evil; mind was good. Poverty was also a virtue. As the church gained worldly stature, it became wealthy. Churches were adorned with beautiful art. Massive cathedrals were built in 13th century France. Renaissance art joined forces with Christianity in the costly project to rebuild St. Peter's church - quite a thing to be seen.
Civilization III:
(a) Renaissance art is characterized by solid, round shapes that suggest palpable objects. Beauty lies in the perfected form of objects. By contrast, at the end of the third epoch the culture had become highly fragmented. Impressionist art did not attempt to depict form or shape but instead created the photographic impression of a scene from scattered dabs of paint. The disjointed forms of Picasso, the atonal music of Stravinsky, Dada, objets trouvés, etc., along with newly invented crossword puzzles, constituted a culture that was without cohesion or even coherence. This culture was without beauty in a traditional sense. Carl Jung compared it with the lacerated thought patterns of schizophrenic patients. A slogan at the Walker Art Center in Minneapolis sums up the new ideal: "Bits & pieces put together to present the semblance of a whole." There can be no integrated whole.
(b) The European nation-state was in the ascendancy in late Renaissance times. One thinks of the trio of strong monarchs during the early 16th century: Henry VIII of England, Francis I of France, and Charles V of Germany, Austria, and Spain. In addition, Suleiman the Magnificent ruled the Ottoman empire and Akbar the Great ruled India. At the end of this epoch, the institution of the national monarchy was dealt a death blow as Kaiser Wilhelm of Germany, Franz Josef of Austria-Hungary, and Czar Nicholas II of Russia were removed from their thrones and replaced by democratic or socialist regimes. The Ottoman empire in Turkey was replaced by a democratic state.
(c) The third epoch of world history is characterized by the pursuit of wealth. European adventurers invaded the Americas in search of gold, enslaving Indians and Africans. Commercial colonies were formed in North America. Industrialization created new wealth. There was a reaction to this wealth in its waning days as labor unions were formed to oppose the owners of wealth. The antislavery movement reasserted human rights above the right to own people. Money was put in its place.
(d) The third epoch began with Europeans asserting control over peoples in other parts of the world. First, they overthrew the Aztec and Inca empires and colonized sparsely populated areas of North America. Later, Britain imposed colonial rule upon India. The European powers won trade concessions in China and carved up equatorial Africa as colonies. The 20th century, on the other hand, was a time when the European powers relinquished their colonies in Asia and Africa. Gandhi struggled for Indian independence. Mao Tse-tung and Ho Chi Minh brought independent rule to their nations under the auspices of communism.
Civilization IV:
(a) The entertainment culture began as fun - an unserious and safe activity suitable for children. This culture ends as big business seeking to protect its intellectual property and media conglomerates exploiting children by appealing to their violent instincts. The entertainment media have trivialized political discussions and, to pay for television commercials, forced politicians to seek money from special interests. This destructive side of the entertainment media overshadows its regenerative side.
(b) In the early 19th century, white Americans ridiculed blacks by supporting entertainment routines such as the "Jim Crow" ditty and blackface minstrel shows. In the mid 20th century, black entertainers gained respect as black athletes competed successfully with whites in professional sports and singers such as Elvis Presley appropriated black musical styles. By the late 20th century, political correctness ruled. Dramas portraying conflict between black and white Americans have usually put blacks in a positive role and reserved the villainous roles for whites.
(6) The arrival of a new civilization also affects institutions that were dominant two epochs earlier. Such institutions undergo a democratizing process.
Prehistory and Civilization II: In a preliterate culture, hereditary priesthoods preside over ritual-based religions through memorized formulae. In time, these priesthoods can develop a self-interest at variance with the interest of the larger community. Civilization II brought a reform of religion. Creeds and ideals became more important than ritual. Also, the priests of the new religions were selected from a broader segment of the population. The priestly positions were not hereditary but were, instead, based on meritorious or bureaucratic appointment. Buddha forcefully challenged the position of the Brahmin priests. He said: "No Brahman is such by birth; a Brahman is such by his deeds." The Buddhist monasteries admitted both men and women, persons of low as well as high birth. Likewise, Christianity allowed persons of low birth to rise in the ecclesiastical hierarchy. In contrast, Jewish priests had come from the tribe of Levi.
Civilization I and Civilization III: Imperial government was the dominant institution of Civilization I. In the third epoch of history, government underwent a democratizing process as commercial interests asserted their power in society. Democratic political revolutions occurred in England in the 17th century, in America and France in the late 18th century, and in Russia and China in the 20th century. Democracy replaced the hereditary basis of government leadership with a process of selection based on elections or bureaucratic promotions. Top government positions became open to persons of low birth. Abraham Lincoln was a symbol of that.
Civilization II and Civilization IV: The dominant institution of Civilization II was that of world religion: the church. How is organized religion democratized in the entertainment age? Some previously closed religious hierarchies have opened themselves to the ordination of women. Religion was a base of support for the black Civil Rights movement in the United States. Televangelist Billy Graham desegregated his rallies. American entertainment has provided opportunities for blacks and women.
Civilization III and Civilization V: The newly emergent, dominant institutions of Civilization III were commercial and educational institutions. Although we do not know what the fifth epoch of world history will bring, we can speculate upon the impact of computers. With respect to commerce, we know that business activity is strongly influenced by E-commerce. There is, indeed, a democratizing effect because the Internet lets merchants succeed without much capital investment. Anyone with a good idea and sound execution can succeed in selling products. Internet-related companies have created instant millionaires. With respect to education, the computer can create courses that will allow students to have high-quality, individualized instruction at a low cost. Therefore, all people, regardless of financial capacity, can afford to have a topnotch education - go to the best colleges, so to speak. That being the case, the fact that a person has attended one college rather than another should confer no social advantage. Computerized education will have a democratizing effect.
Perils of Prediction
The science of prediction has a spotty past. Abundant tales show the folly of attempting to foresee how one or another invention might fare in daily life. "Radio has no future. Heavier-than-air flying machines are impossible. X-rays will prove to be a hoax," said the eminent British scientist, William Thomson. In 1946, Darryl F. Zanuck, head of 20th Century Fox, took a dim view of television's future. "People will soon get tired of staring at a plywood box every night," he predicted. President Rutherford B. Hayes said of Bell's telephone: "That's an amazing invention but who would ever want to use one of them?" Concurring with that sentiment, a Western Union memo commented: "This telephone has too many shortcomings to be seriously considered as a means of communication."
Others took an overly optimistic view of technologies that were emerging in their day. A vacuum cleaner manufacturer predicted in 1955: "Nuclear powered vacuum cleaners will probably become a reality within 10 years." A writer for the Brooklyn Eagle predicted in 1900 that "mail will be delivered to homes in pneumatic tubes." Futuristic scenarios conceived in the 1950s saw masses of people commuting to work in helicopters. On the other hand, there were many important inventions that no one foresaw: microwave ovens, Velcro, TV dinners, laser surgery, air bags, the Internet.
Knowing the future can be valuable if a person is able to position himself or invest his money to take advantage of an emerging trend. Stock-market advisers make a living from keeping abreast of the latest product developments in their area. Thousands of investors anxiously await each month's issue of the Gilder Technology Report. Its web site is jammed when the report is first posted on the Internet. Stock prices quickly shoot up when Gilder makes favorable comments about a technology or a company with products utilizing it. Gilder's own following virtually ensures that. But, of course, the first investors with this information reap the biggest rewards; investment news is soon discounted.
In the mid 19th century, a group of intellectuals clustered around Ralph Waldo Emerson were inspired by the thought that American culture would soon equal or surpass European culture. No one embraced this idea more enthusiastically than Walt Whitman, the poet, who wrote in Democratic Vistas: "I, now, for one, promulge, announcing a native expression-spirit ... for these States ... different from others, more expansive, more rich and free, to be evidenced by original authors and poets to come, by American personalities ... and by native superber tableaux and growths of language, songs, operas, orations, lectures, architecture - and by a sublime and serious Religious Democracy sternly taking command ... and from its own interior and vital principles, reconstructing, democratizing society." What actually came, when American culture triumphed a century later, was popular culture - films rather than operas, rock lyrics rather than poems, vaudeville, cartoons, sitcoms, and other unserious works. Few professed to be creating expressions of democratic culture. Except in the Soviet Union, that kind of thinking was out of date. Whitman could not have anticipated the impact of new communication technologies upon cultural expression.
The most sweeping kinds of prediction have been associated with religion. From time to time religious prophets have appeared to announce that the world would shortly end. William Miller brought thousands of his followers to the hilltops of Massachusetts and New York state to await that event, expected to occur within a year after March 21, 1843. When this period of time had lapsed and all seemed normal, Miller rescheduled the apocalyptic date for October 22, 1844. Its failure to occur was dubbed "the great disappointment". The Mormons, Jehovah's Witnesses, early Christians, Branch Davidians, Heaven's Gate cult, and others have had similar expectations; yet, to date, the world as we have known it through history remains largely intact. Attempts to predict the end of the world, or any larger course of events, will therefore be met, deservedly, with considerable skepticism.
In 31 B.C., right after Octavian defeated Antony and Cleopatra at the battle of Actium to become undisputed ruler of the Roman empire, an historian might have made several predictions. First, recognizing that a series of warlords (sometimes in partnership) had ruled Roman society for more than a half century, he might have foreseen that the relatively inexperienced Octavian, Julius Caesar's grand-nephew and adopted heir, would eventually lose out to someone else in a power struggle. He might have foreseen that the raging tensions between rich and poor would tear Roman society apart or, perhaps, be resolved in the Senate. None of these things happened. Octavian had unexpected political and administrative skills which allowed him to consolidate power in himself and found Rome's first imperial dynasty. Dynasties of this type lasted in the West until the 5th century A.D. and, in the East, until the 15th century A.D.
The same historian, looking at Rome's position in the world, might have made several other predictions. Recalling that the Persians had conquered the Medes and Babylonians, and that Alexander the Great of Macedon had conquered Persia, and that Rome had conquered the remnants of the Seleucid, Ptolemaic, and Macedonian Greek empires, he might pessimistically have expected that some new political empire would conquer Rome's, perhaps the fierce Parthians to the east. Or, taking a more optimistic view, he might have expected that Rome would conquer the Parthian empire. Neither happened. Rome continued to withstand the Parthians despite centuries of warfare. The Parthians, succeeded by the Sasanid Persians, likewise staved off defeat at the hands of the Romans. Recalling Julius Caesar's successful prosecution of the Gallic wars, this historian might also have expected the Roman empire to expand into barbarian territories to the north and east. This possibility was only partially fulfilled. The Romans did conquer much of Britain and Rumania; however, their attempt to expand eastward into Germany was frustrated when Germanic tribes led by Hermann decimated three Roman legions in a battle fought in 9 A.D. Octavian, now Augustus Caesar, subsequently fixed the empire's northern boundary at the Rhine and Danube rivers.
Rome's ultimate fate was completely off this historian's radar screen. Despite Hermann's victory, it would have been most unlikely that Germanic or other nomadic tribes could overrun the western Roman empire, sack Rome, and establish petty kingdoms throughout western Europe while Roman government would last in the eastern provinces for another thousand years. Even less likely would have been that a religious prophet from Galilee, condemned by action of a Roman proconsul in Judaea and executed for blasphemy sixty years later, would come to be worshiped as "Son of God"; and that his cult, after centuries of persecution, would first claim a sizable share of Rome's population and then become Rome's state religion; and that the new religion of Christianity would provide the cultural matrix for post-Roman society, converting Rome's nomadic conquerors, and then spread into lands throughout the earth. World religion as a successor to political empire would have been most inconceivable.
Fifteen hundred years later, the possibility of religious empire was plainly seen. Militant Christians who had expelled the Moors from the Iberian peninsula were eager to win new souls for Christ. Alexander VI had issued a papal bull in 1493 dividing the newly discovered lands outside Europe between Spain and Portugal on condition that they convert the people of those lands to Christianity. A plausible scenario, given Europe's destined expansion of influence, was that the Roman church would eventually rule the entire world. It did not happen. Although Jesuit priests supported by the Spanish and Portuguese colonial governments converted the native peoples of Latin America to the Roman Catholic religion, similar efforts in the Far East failed when the Chinese and Japanese governments expelled Christian missionaries in the 17th century. Europe itself became religiously divided during the period of the Protestant Reformation. Despite the Pope's declaration, the French, Dutch, and English colonized North America; they seemed more interested in obtaining commercial advantages than in spreading the Christian religion. The times were turning away from religious ambition and instead embracing such things as commerce, science and technology, literature and music.
So it would seem that would-be predictors of larger trends would consistently have been frustrated had they treated world history as a logical progression from things in the past. New institutions and new sets of concerns arise to replace those known in the past; and it seems that the future will gravitate more towards what has never been than towards what was. Of what use, then, is history in predicting the future?
All we can say is that history is our main source of knowledge about how the world works in concrete situations. Political leaders charged with making important decisions often let historical analogies guide their decision making process. For instance, Harry Truman wrote in his autobiography that he saw a parallel between the Congressional "Committee on the Conduct of War" established during the U.S. Civil War, which became a center of espionage for the Confederacy, and a similar investigating committee which he chaired during World War II. He therefore took extra precautions to make sure that this committee did not leak valuable information to the Nazis. "Almost all current events in the affairs of governments and nations have their parallels and precedents in the past," Truman wrote. "I know of no surer way to get a solid foundation in political science and public administration than to study the histories of past administrations."
General Yakubu Gowon, Nigeria's head of state during the war with secessionist Biafra, had read Carl Sandburg's four-volume biography of Abraham Lincoln. Identifying his own cause with that of the North, Gowon told reporters that he could recognize the "Shermans" and the "Grants" among his commanders. On the other hand, Adolf Hitler was mistakenly encouraged by President Franklin D. Roosevelt's death to hope that the Allies might relax their military pressure upon his crumbling nation, because his hero, Frederick the Great of Prussia, had been rescued from probable defeat when Russian armies pulled back following the death of Empress Elizabeth. Such analogies may or may not hold.
To predict history on the broadest level we cannot rely upon any particular set of events proceeding from the present situation but only on general expectations based on the nature of human societies like the following: What goes up usually comes down. What is born dies. People fight for rank and position. Powerful interest groups try to protect their own turf. These are some of the "lessons" to be drawn from past history. On the positive side, the new is youthful and vigorous and creative, but also unpredictable. One must make allowance for unexpected paradigm shifts. Future history will frustrate our best efforts to project a certain vision unless, perhaps, we ourselves participate in the fulfilling events.
INTIMATIONS OF A FIFTH CIVILIZATION
Shape of the Computer Age
As humanity approaches the end of the second millennium A.D., three civilizations have passed into world history, a fourth has reached the stage of maturity, and a fifth appears now on the horizon. Since this fifth civilization is still in its infancy, we do not know what its epoch will bring except that computers will be the defining cultural technology. Our expectations of this new civilization are based on analogy with other civilizations whose histories are known. They are based on past events and present trends and on reasonable projections from them. But there is also a part of the history which cannot be known at this time. Predictions of the future are notoriously inaccurate. Be forewarned that the discussion in the remainder of this chapter will attempt that very thing.
In making our historical argument, we assume that the appearance of a new dominant cultural technology - in this case, computer technology - means that a new civilization is about to emerge. Each new civilization produces new institutions in society as its sectors become further differentiated. While their functions may also have existed in earlier times, they become better organized and are incorporated within the society’s power structure. The new civilization brings a new set of values, beliefs, and models of personality. There is a flow of events from previous civilizations. There are points of conflict. Values and ideas that are dominant at the beginning of the historical epoch may be overtaken by their opposite as it comes to an end.
The Nature of Computers and Related Projections
If the new civilization will be shaped by computers, then the direction of events may be related to the nature of this shaping technology. In that regard, one can make the following statements about computers:
• Computers can perform mathematical or logical calculations with great accuracy and speed. They can handle scientific computations or business recordkeeping functions with greater proficiency than human clerks or engineers could using mechanical calculators and sheets of paper. Certain calculations become possible which would otherwise have been too difficult to make in a reasonable time.
• Computers have the ability to keep track of numerous facts about individual persons and make lists of individuals who share certain characteristics.
• Computers can communicate with other computers over telephone lines. They can access information in a limitless number of files.
• Computers can store visual and aural images and manipulate these images in desired ways. The altered images can create the illusion of an imaginary scene.
• As computer technology progresses, its cognitive capabilities may approach those of the human brain. Man’s position in this world may be challenged by an equally intelligent creature of his own making.
The implications of these capabilities are many. If the cost of computers is compared with the cost of human workers who perform the same task, employers may decide to substitute investment in machines for hiring people. That decision may impact employment and social conditions. If computers can store and analyze information about individuals, that means that organizations can locate individuals more quickly and easily. More efficient marketing campaigns become possible. If computers can communicate with many other computers, the dissemination of information is greatly increased. Linked computers amount to another communications medium. If computers can create new visual and aural images, then creative expression can go beyond sense impressions captured on film or tape. This expands the possibilities of communicating for educational, business, or entertainment purposes. Finally, if computer intelligence rivals that of human beings, then the computer becomes potentially either a greatly upgraded servant or master of the human species. Human dominance of the earth is threatened. The world of science fiction comes into view.
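The second capability listed above - keeping track of facts about individuals and making lists of those who share certain characteristics - can be sketched in a few lines of code. The names and fields below are invented for the example; a real marketing database would hold millions of such records.

```python
# Toy illustration: a computer holding facts about individuals and
# producing a list of those who share certain characteristics.
people = [
    {"name": "Alice", "age": 34, "city": "St. Paul", "subscriber": True},
    {"name": "Bob", "age": 51, "city": "Minneapolis", "subscriber": False},
    {"name": "Carol", "age": 29, "city": "Minneapolis", "subscriber": True},
]

def select(records, **criteria):
    """Return the records whose fields match every given criterion."""
    return [r for r in records
            if all(r.get(field) == value for field, value in criteria.items())]

# Who are the subscribers living in Minneapolis?
matches = select(people, city="Minneapolis", subscriber=True)
print([r["name"] for r in matches])  # -> ['Carol']
```

The same filtering operation, applied to large files of personal data, is what makes the targeted marketing campaigns described above possible.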
Employment Implications
When business firms first embraced computer technology in the 1950s, the term “automation” was used to describe its application to industrial processes. That word conjured up the image of factories which operated on automatic pilot and did not require human labor. Social theorists then wondered how human beings might find employment if the old-style jobs disappeared. At its worst, the age of computerization might bring mass unemployment and poverty. At its best, it might lead to a society which could afford to support people without working. Proponents of the more hopeful scenario foresaw that government might redirect the wealth achieved by industrial progress toward the masses of people in an expanded welfare state. Income would be disconnected from work. Another possibility was that labor unions would continue to agitate for higher wages and shorter working hours. Even if machines assumed a portion of that work which human beings once performed, human workers would still be needed in other areas of productive enterprise and could command a healthy wage on that basis. Opponents argued that such adjustments were unnecessary. On its own, the economy would develop new kinds of output to keep everyone productively engaged.
By the close of the 20th century, the following picture has emerged: Business has continued to apply computer technology at a fast pace. Corresponding improvements in labor productivity have taken place. These developments have not brought a collapse of employment. In the United States, the current level of unemployment is low by historic standards. In western Europe, expectations of employment shortfalls have been realized to a greater degree. American labor unions have largely abandoned their campaign for shorter working hours. The overtime premium, meant to discourage working longer hours, has instead become a kind of wage supplement eagerly sought by some workers. As a result, average working hours have lately increased in the United States. The idea that the government would support people without working caught on for a time, but this practice, too, has lately lost steam. Welfare reform, forcing single mothers to seek paid work, has won bipartisan support. Paid retirement continues to persuade many older persons to withdraw from the work force, but its funding mechanisms are a perennial problem.
The salvation of employment has been continued growth of Gross National Product. Productivity increases have not meant growing joblessness because dollar-denominated output has also increased. Yet, although employment numbers remain strong, wage gains for most workers have not kept pace with the gains achieved in previous years. Some in upper-level managerial and professional positions have done well while a large segment of the work force is stuck in low-wage jobs. As for the growing output, a declining share consists of goods and services which are actually useful to people. Many areas of expanding economic activity could be characterized as waste. The gambling craze which has hit the United States produces a few big winners and some prosperity in communities near the casinos, but otherwise impoverishes individuals. The drug epidemic and war on drugs contribute to an atmosphere of lawlessness combined with increased job opportunities for police officers and corrections officials. Ours is an economy increasingly driven by lax credit policies, sweepstakes promotions, the conversion of holidays into times to hawk merchandise, and excessive litigation and medical treatment.
Modeling the Natural World
Computers have been used to model the natural world. Scientific knowledge is expressed in the form of mathematical equations that describe relationships in nature. To become useful, this knowledge must be applied to real situations. Computer users in the scientific community have developed techniques of simulating natural conditions by observing and measuring conditions in various places and assigning numerical values to spatial locations for the relevant variables. For instance, data pertaining to temperature, humidity, and wind velocity, collected in several places, can be assembled in the computer to create a picture of an approaching thunderstorm. Calculation of the scientific equations helps to predict how these various elements will interact and to show how the storm, in its entirety, will develop over a period of time. Weather predictions, in turn, give advance warning of dangerous conditions. The damage can be minimized by taking timely action. Powerful computers are needed to record the large quantities of data and make the calculations in time to furnish a useful result.
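The grid-and-step idea described above can be sketched in a few lines of Python. This is a toy illustration rather than a real forecast model: the station readings, the grid size, and the one-cell "west wind" rule are all invented for the sake of the example.

```python
def build_grid(width, height, stations, default=15.0):
    """Place point measurements (e.g. temperatures) onto a rectangular grid."""
    grid = [[default] * width for _ in range(height)]
    for (x, y), temp in stations.items():
        grid[y][x] = temp
    return grid

def step_east(grid):
    """Advance the field one time step: a uniform west wind shifts
    every value one cell to the east (a deliberately crude rule)."""
    return [[row[(x - 1) % len(row)] for x in range(len(row))]
            for row in grid]

stations = {(0, 0): 21.5, (2, 1): 18.0}   # invented station readings
grid = step_east(build_grid(4, 3, stations))
```

A genuine model would replace the shift rule with a numerical solution of the governing equations, but the overall shape is the same: measured values are attached to spatial locations, and a calculation carries the whole field forward in time.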
Much scientific experimentation pertains to the worlds of the very large or very small. To make sense of events on that scale, the computer must convert mathematical data into visual images that will suggest something to human intelligence. Astronomers, for instance, have collected a wealth of data from electromagnetic radiation coming from distant places in the cosmos. Events which take place in this realm happen too slowly to register a perceptible change. The computer can speed up the process to show what will become visible after many years. Some of this information can be used to test theories of how the universe might have originated. There is an equally compelling need for structural models to make sense of phenomena on the submicroscopic level. If it were not for supercomputers, the emerging science of genetics could not absorb all the information that has become available about the chemical structure of the DNA and RNA molecules. These molecules, which govern all life processes, contain millions of genes in a particular sequence.
Computer simulations can demonstrate the effect of changes in a structural model much more easily and cheaply than if tests were performed on a physical model. They have revolutionized the design of commercial products ranging from jet aircraft to toothbrushes. Industrial designers once had to fit physical prototypes with sensing devices to measure stress and strain. Now they can observe stress changes with a few clicks of the keyboard as they play with a computer model. When the MacGregor Golf Company wanted to find the best design for a new golf club made of titanium, they tested a simulation on a Cray Y-MP supercomputer. The result was that additional slots and teeth were cut to stiffen the club and improve its aerodynamics. The automobile industry has used computer simulations to conduct crash tests on cars. Dow Chemical has used computers to test the absorbency of disposable diapers. Aircraft manufacturers observe the flow of air around variously shaped objects shown on a computer screen to pick a shape that minimizes turbulence. Computer models can be viewed as if in three-dimensional space. They can be rotated or displayed in a way which reveals their interior structure.
The new technology of Geographic Information Systems (GIS) embeds computerized information in maps. Each geographical location is identified not only by its spatial coordinates but by topological features and nonphysical characteristics such as ownership or political jurisdiction. The computer’s ability to combine information allows maps to be drawn with differently colored areas to represent significant conditions. For instance, a GIS cartographer might show in red all parcels of land in Lafayette County, Mississippi, used primarily for farming which were owned by persons over 50 years of age. Presumably a map of this sort might interest a salesman who specialized in selling products to elderly farmers. Algorithms can be written to direct an automobile at a satellite-detected location to a particular street address. Orbiting satellites equipped with cameras sensitive to radiation of various wavelengths can send data back to earth which indicates the type of vegetation found in particular regions. Special-purpose maps can be created from this information to illustrate, for instance, the effect of drought in nations which furnish certain agricultural commodities.
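The kind of attribute query described above, combining land use and owner age into a colored map, can be sketched simply. The parcel records, field names, and coordinates below are invented for illustration; a real GIS would attach such attributes to actual parcel polygons.

```python
# Invented parcel records standing in for a county GIS database.
parcels = [
    {"id": 1, "use": "farming", "owner_age": 63, "coords": (34.36, -89.52)},
    {"id": 2, "use": "residential", "owner_age": 71, "coords": (34.37, -89.53)},
    {"id": 3, "use": "farming", "owner_age": 45, "coords": (34.35, -89.51)},
]

def color_parcels(parcels, use, min_age):
    """Return a parcel-id -> color map: red where both attribute
    conditions hold, gray otherwise."""
    return {p["id"]: ("red" if p["use"] == use and p["owner_age"] > min_age
                      else "gray")
            for p in parcels}

colors = color_parcels(parcels, use="farming", min_age=50)
```

The mapping step itself would then paint each parcel's polygon in the color assigned to its id; the essential GIS capability is the join between spatial features and nonphysical attributes.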
The graphic capability of computers depends upon digitalized information which can be manipulated to produce various effects. In that respect, it differs from earlier technologies which transformed visual images into electromagnetic impulses without changing the image. Computer graphics have cut the cost of film production while greatly expanding the range of visual possibilities. Computers can create images of objects that never existed. Cartoon characters can be “morphed” into human actors, and vice versa. Realistic-looking scenery can be created for imaginary situations such as the destruction of entire cities or warfare conducted in outer space. Some of today’s most popular films depend heavily upon computer animation. Action adventures such as Terminator II and Independence Day have dazzled audiences with vistas once scarcely imaginable. Daniel Hillis, a former computer-industry executive working at Disney, has said: “Just as the space program was the big driver of (computer) technology at one time, the new driver is the entertainment industry.”
Selling by Computer
Computers have transformed the arts of advertising and selling commercial products. Their ability to collect and analyze data pertaining to individual customer preferences has changed the paradigm of advertising strategy. David L. Milenthal, chairman of HMS Partners, explained that the purpose of advertising was previously to strengthen brand name or “to develop a far-reaching, creative brand personality for our products and services. Once that personality was developed and securely ‘branded’ into the consumer’s consciousness, we would use whatever funds were left to develop slightly more focused messages...” Now computer data bases allow advertisers to “identify and define individual customers”, especially those who tend to spend more money. “We are moving from the era of shotgun-style ‘mass marketing’,” said Milenthal, “into a new millennium of ‘customerization’ in which we have finally unlocked the key to actually knowing - not just predicting - the message and combination of communication vehicles that can best reach our clients’ customers.”
The new marketing approach is driven by two requirements. First, the message must be honed to suit the prospective customers’ individual preferences. Second, the subsequent marketing effort must involve enough people for the marketer to be able to take advantage of economies of scale in communicating with customers. The only way to meet these conflicting requirements is to identify narrowly focused personality types within a large customer pool. Sophisticated marketers with access to computer data bases can pull off this feat. Banks, utilities, telemarketers, direct-mail specialists, and other businesses having a large customer base are the types of firms which make use of customer-profiling techniques. The average American receives 553 pieces of unsolicited or “junk” mail each year from organizations that use targeted mailing lists. Each dollar spent in direct-mail solicitation typically brings back ten dollars in sales. This approach is twice as effective as spending for television commercials. Therefore, the volume of junk mail is expected to triple in the next decade. To be successful, however, the direct-mail advertiser must pay attention to detail in designing the letters and, above all, mail to the right list. Skilled analysis of computer data provides the information needed to do that well.
Fingerhut Companies, Inc., a large direct-mail retailer, mails out 130 different catalogs to computer-defined groups of customers. A mailing never goes out to fewer than 10,000 persons but the list can be as large as a million. Through an operation known as “data mining”, the company’s marketing experts analyze 3,500 variables representing its customer base with an eye to segmenting individuals into groups with similar buying habits. This analysis helps Fingerhut predict how particular customers will respond to direct-mail campaigns. A software feature instructs the computer to search through the data file to find interesting but previously unknown relationships between customer variables which may reveal a propensity to buy certain products. Fingerhut uses this information to decide which customers should receive which catalogs and how many catalogs to print. It may cost the company $400 to $900 to print and mail 1,000 catalogs. Even a slight improvement in accuracy in defining the target audience reaps a huge benefit in increased sales and profitability.
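One common way such "interesting but previously unknown relationships" are found is to measure how much more often two purchase variables occur together than chance would predict, a statistic known as lift. The sketch below is a minimal, generic illustration of that idea, not Fingerhut's actual software; the customer records and category names are invented.

```python
from itertools import combinations

# Invented purchase records: 1 means the customer bought in that category.
customers = [
    {"garden": 1, "tools": 1, "toys": 0},
    {"garden": 1, "tools": 1, "toys": 0},
    {"garden": 0, "tools": 0, "toys": 1},
    {"garden": 1, "tools": 1, "toys": 1},
]

def lift(records, a, b):
    """Ratio of observed co-purchase rate to the rate expected if a
    and b were independent; lift > 1 suggests a real association."""
    n = len(records)
    p_a = sum(r[a] for r in records) / n
    p_b = sum(r[b] for r in records) / n
    p_ab = sum(r[a] and r[b] for r in records) / n
    return p_ab / (p_a * p_b)

pairs = {(a, b): lift(customers, a, b)
         for a, b in combinations(["garden", "tools", "toys"], 2)}
```

A pair scoring well above 1 (here, garden and tool buyers overlap strongly) would flag customers for the corresponding catalog; scanning thousands of variables this way is what makes the analysis a computing problem.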
Because of its potential to send electronic messages to a large audience, the Internet is being viewed as a promising medium to advertise commercial products. The trick is to lure viewers to a Web site where the advertisements are displayed. With so many different Websites to choose from, users can log on to sites representing highly specialized interests. Those who maintain or advertise on the Websites can be assured that their messages will reach a highly targeted audience. Costs can be kept to a minimum. Another advantage of computer-based communication is that the senders of messages can track the responses automatically. They can painlessly create a data base to record consumer preferences. Advertisers will know precisely how many and what types of people saw a commercial message and which of them purchased the product. IBM has worked with the sporting-goods retailer, L.L. Bean, and others to develop a catalog-like display on the Internet. Its software package includes order placement, shipping, and billing for products purchased online.
While less than half of today’s Websites are profitable, businesses advertise on the Internet because this market is expected to expand enormously in the years ahead. Annual sales on the Internet are expected to rise from between $7 billion and $13 billion in 1998 to $41 billion in 2002 and perhaps $103 billion in 2003. America Online, which had 13.5 million paying subscribers in August 1998, reports that 48% of them made online purchases, up from 42% that January. In 1998, Internet commerce was becoming dominated by three firms: America Online, Yahoo!, and Microsoft. Like an electronic shopping mall, Yahoo! alone offered more than two million products from 27,000 different stores. Online sales are expected to overtake catalog sales in 1999. Driving this surge in E-commerce are dropping personal-computer prices and connection times, Web pages that are becoming easier to use, and large Web sites and better search engines that overcome the fragmentation of markets. Because of high processing costs and privacy concerns associated with credit cards, several firms have explored new methods of payment using digital money or smart cards.
The most popular products purchased online to date are computer hardware, travel tickets, and books. Online merchants are able to customize advertisements to the consumer. For example, someone who orders a book from Amazon.com may receive a message recommending another book which other customers who fit a similar buying profile have enjoyed. He or she may consider this “permission advertising” (which is “anticipated, personal, and relevant”) to be helpful rather than intrusive. Search engines called shopping “bots” (robots) allow customers to look through numerous Websites in search of the lowest price for particular products. They can order the products online or else use this information to haggle with conventional merchants. While one would suppose that consumers might prefer to do their own shopping for food at supermarkets, a firm called Streamline in Boston takes grocery orders on the Internet and delivers to the household for a flat $30 per month. Customers save an average of four hours’ personal time for each order placed. While they cannot squeeze peaches or Charmin on the Internet, customers can read ingredient labels. Streamline’s founder predicts that people will eventually do the bulk of their shopping through electronic clearinghouses.
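The price-comparison behavior of a shopping "bot" can be sketched in miniature. The vendors below are fictitious stand-ins for real Web sites, each represented by a simple price-lookup function; a real bot would instead fetch and parse each merchant's pages.

```python
def bot_best_price(vendors, title):
    """Query every vendor for the title; return (vendor, price) for
    the lowest quote, skipping vendors that do not stock the item."""
    quotes = [(name, lookup(title))
              for name, lookup in vendors.items()
              if lookup(title) is not None]
    return min(quotes, key=lambda q: q[1]) if quotes else None

vendors = {  # invented catalogs standing in for real online stores
    "shopA": lambda t: {"Moby-Dick": 12.50}.get(t),
    "shopB": lambda t: {"Moby-Dick": 10.95}.get(t),
    "shopC": lambda t: {}.get(t),
}

best = bot_best_price(vendors, "Moby-Dick")
```

Armed with the result, the customer can either order from the cheapest site directly or, as the text notes, use the quote to haggle with a conventional merchant.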
When journalists describe the promise of computer technology, they tend to overlook some of the more useful applications in favor of the frivolous. They have reported, for instance, that computerized sensing devices in Bill Gates’ mansion automatically turn the lights on and off or play a person’s favorite music as he walks from room to room. A more compelling need would be for computers to liberate humanity from the knowledge requirement that comes with purchasing and maintaining technically sophisticated machines. Karl Marx once said that for capitalistic markets to function properly customers needed to possess an “encyclopedic” knowledge of products and prices. An untrained housewife is supposed to know something about each of thousands of products and monitor their constantly changing schedules of prices. The knowledge crunch becomes acute in a society whose ease and convenience depends upon a host of highly technical products such as electrical appliances, automobiles, and building supplies. One needs access to a huge source of information both to shop intelligently in this environment and properly maintain the products after they are purchased.
A solution has been to build knowledge into the products themselves. Microprocessors embedded in the products can hold technical information that can be used to guide consumer applications or pinpoint repair problems. For example, diagnostic circuitry in a photocopying machine tells the user what is causing the machine to malfunction and where corrective action needs to be taken. Instead of calling a service technician, an office worker can fix the paper jam by following simple instructions. If a refrigerator malfunctions, a device attached to the machine can communicate with the manufacturer’s Website to receive pertinent information. Microprocessors control 83 percent of the functions in new models of cars including fuel-injection and braking systems. These miniature computers work to optimize performance in each area. There is also a move to install microchips in ordinary household appliances to maximize energy efficiency. Electric and gas utilities have installed home servers that control such devices as furnaces and air conditioners, adjusting them automatically to the right temperature settings and eliminating the need for meter readers.
If computers can monitor the operation of machines and signal when something unusual needs to be done, their human owner can safely forget about all but the machine’s intended function. Computers will tell the user when preventive maintenance is due. If a device is dangerously overheated, it will shut down automatically. Magnetic bumps or strips embedded in the machine may contain information like that in a repair manual. Downloaded to a personal computer, they might provide easy-to-follow instructions in a menu-like format. No longer would consumers need to search for an owner’s manual which might have been thrown out or misplaced. Whirlpool, which spends an average of $50 on each warranty service call, saves a considerable sum of money when information downloaded to the Internet tells the service technician which repair parts to bring. “You haven’t seen anything yet,” a Wall Street Journal article exclaimed. “Plug a constellation of devices into the Internet and the myriad gadgets of everyday life will get smarter and more useful. When people hook up their PCs to the World Wide Web, they transform glorified typewriters into windows on a world of information.”
Education and Training
Many children in the United States are exposed to computers in day-care facilities. Computer instruction becomes better organized in kindergarten and first grade. Games like Putt-Putt show preschoolers how adults handle daily living routines. The Mario Brothers’ typing game teaches typing skills. There are games to teach the ABC’s and games to teach math. Children learn about geography while playing detective in a game which sends them out on investigative assignments. In the early 1980s, educators began to develop a type of instruction that linked the Apple II computer to video playback machines. The idea was that, like video games, these computer-based systems would teach skills or convey knowledge as a by-product of entertainment. Students would work alone at the terminals on an assignment. If they failed to complete the assignment in time, a buzzer would sound which would alert a roving teacher to the fact that these individuals needed extra help. Some computer exercises functioned like tests which required students to remember information and immediately correct wrong answers. The computer blends sights, sounds, and texts into a set of expressions which can be customized to suit the individual’s learning style. Today, on average, U.S. schools have one computer per 7.8 students. Nine out of ten schools are hooked to the Internet.
In some situations, computer-based teaching has replaced traditional modes of instruction. Generally this takes place when real-life training would be too expensive or dangerous to offer on a regular basis. For example, jet pilots are normally trained in flight simulators which display appropriate visual scenery while a student pilot works the control panel. Virtual-reality machines have helped to train police officers to make the right split-second decisions when considering whether or not to shoot an armed suspect. Inexperienced surgeons can make their learning mistakes while operating on virtual rather than real patients. The U.S. Army developed the SIMNET system to allow as many as one thousand persons at a time to engage in a simulated tank battle. Students at the Fort Knox Armor School learn military teamwork and artillery skills on computer-controlled equipment instead of tearing up nearby terrain. In virtual-reality machines, sensors attached to the viewer’s body send signals to the computer that change the scenery in response to bodily motions. If one’s head turns to the left, a new panorama will appear on the screen. The experience of watching electronic images respond to one’s own physical movement enhances the illusion that one is participating in an actual event.
Computer technicians have been experimenting with an application known as “augmented reality” which supplies work-related knowledge in real time. The workers wear glasses upon which diagrams, parts lists, and instructions are projected while they are performing work. Boeing thinks that this technology will help aircraft engine mechanics do their jobs more skillfully. The transparent glasses permit both real-life vision and computer-generated images such as the engine’s interior view. A type of software known as “electronic performance support systems” (EPSS) is being used by some U.S. companies to help lightly trained workers perform complex technical work. This software automates much of the knowledge used in job routines and provides on-the-spot instructions for human workers to figure out the rest. The National Association of Security Dealers has a program, CornerStone, which helps perform an audit of a securities firm. It takes an auditor through the major procedures step by step disclosing key ratios that might indicate securities violations. NASD estimates that use of CornerStone has cut average training time for employees from 2 1/2 years to one year. PricewaterhouseCoopers, a public-accounting firm, has used EPSS to train consultants. This software is especially good at mastering arcane detail.
As computer applications such as EPSS find their way to the business world, more relatively inexperienced job applicants may become qualified to fill technical and professional positions. That will increase the pool of applicants competing for a given position, which, with fixed demand, would tend to depress the wage offering. The adage, “what you earn is a function of what you can learn”, may become less apt. Predictably, the wage differential between workers with and without college degrees - currently 50 percent - will narrow as the knowledge which a prospective employee brings to the job becomes less critical. The narrowing of the wage gap may, in turn, dampen young people’s interest in continuing their educations. And so, computerized work may weaken one of the main foundations of the third civilization, which is the use of education to advance oneself in a career. Computers may do to the college-educated worker of the 21st century what industrial and office machines did to the high-school graduate early in the 20th century: Remove the element of personal knowledge and skill from jobs and make the working person a tender of machines.
Prediction by Analogy
Past civilizations, which have already run their course, can suggest the future of existing civilizations. The foregoing discussion has been based upon extrapolation from current trends. Events may or may not unfold as predicted. Now, in the remaining part of this chapter, we will follow up on the idea that the arrival of a new civilization brings changes to institutions formed both in the previous epoch and two epochs earlier. The fifth civilization, driven by computer technology, will shake the foundations of the current society in ways as yet unseen. If the past is any indication, one can anticipate that, on one hand, the news and entertainment media and, on the other, institutions of commerce and education will be caught in the vortex of fundamental change. Already a reaction seems to be taking place against empires of the entertainment culture. Broadcast entertainment is dissolving into a myriad of specialized communications. Bearing in mind that the creative energies of CivIV are not yet fully spent, this book speculates how those events will ultimately play out. With respect to the second area of prediction, one can look to a transformation of institutions associated with CivIII. A profound democratization may affect institutions dating back to Renaissance times. While the discussion is speculative, we can at least focus upon areas of expected change.
New Ways of Deciding How to Buy Consumer Products
A person’s material needs are met by consumer products placed on the market. Each consumer must decide for himself or herself which products, among many, to buy. In making that decision, three considerations need to be addressed: (1) The product should meet the consumer’s needs as closely as possible. (2) The product should carry the lowest possible price. (3) The product should be convenient to purchase and deliver. Under the current system, product decisionmaking is influenced by paid advertisements in the media, by product displays in stores, and by other devices which attract attention as a person may be thinking of other things. Computer technology allows information about products to come to the consumer when he or she is focused on a particular need. This is a more intelligent way to buy products. The process serves the consumer rather than purveyors of merchandise.
The thought that a person needs to buy a particular product to suit a certain purpose in life normally arises from habits of upbringing. Such needs and wants arise from observation of others and from information obtained by word of mouth. In addition, paid commercials in the entertainment media plant ideas about branded products in relation to personal lifestyle. Such an approach to consumer buying is mostly hit-or-miss. A better approach might be to present information on Websites which address particular areas of life experience. Knowledge presented on the Internet would guide consumers to the right consumer products used in those contexts. For example, one such presentation might disclose what a homeowner needs to know to maintain the heating system in a house. The discussion would include explanations of commercial products to meet each need. Like television infomercials, these presentations would be visual and personal. Unlike the infomercials, they would not be directed towards selling any particular product but, instead, would inform consumers of the range of products available and suggest ways to choose intelligently between them. Once consumers begin to trust these various Websites, their proprietors would exercise a certain power, like that of the television networks today, in commanding people’s attention.
The bewildering variety of technically advanced products suggests a need to standardize product information in computer data bases. A beginning step might be to create a universal code number which would uniquely identify each consumer product in general use today. A segment of that code might identify its manufacturer. Complete product information would become available by typing the universal code number in appropriate places on a Website. Directed to a file for the particular product, the consumer could move about clicking on topics of interest. If the product malfunctions or needs repairs, answers to simple questions about the malfunction might bring explanatory texts to the screen. Typing in certain keywords would direct a person to appropriate places in the text. Information downloaded from the product itself would help to narrow the search for instructions to solve the problem at hand. Computers make it possible to store in one place detailed information concerning the use, repair, technical specifications, and warranty protection of every major commercial product. Nothing needs to become lost or forgotten.
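The segmented code idea above can be illustrated with a small parser. The layout here, five digits of manufacturer prefix followed by seven digits of product number, is invented for the sake of the example; real schemes such as UPC/EAN divide their digits differently and include a check digit.

```python
def parse_code(code):
    """Split a 12-digit universal product code into its
    (manufacturer, product) segments under the invented layout:
    digits 1-5 name the maker, digits 6-12 name the product."""
    if len(code) != 12 or not code.isdigit():
        raise ValueError("expected a 12-digit numeric code")
    return code[:5], code[5:]

manufacturer, product = parse_code("012340005678")
```

With such a scheme, the manufacturer segment alone is enough to route a query to the right company's file, after which the product segment selects the specific record of specifications, repair texts, and warranty terms.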
Carrying this concept a step further, one can imagine that the Internet might become a kind of electronic “Yellow Pages” to help consumers find local suppliers of products that they wished to buy. Given the name of a product or its universal product-code number, a search engine might explore the Websites of vendors in the area to learn which merchants carried the product and at what price. Each merchant might regularly upload data from the store’s inventory file to its Website. The consumer’s personal computer might list in ascending order of price all retail outlets within a certain geographical range which had the desired product in stock, giving their telephone numbers and street addresses. The computer might also disclose pertinent information concerning competing products. Computers linked to the Internet could thus provide a means of quick comparison shopping. This would take the guesswork out of shopping and force merchants to offer low prices in order to compete.
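The lookup just described reduces to a filter-and-sort over merchant inventory files. The store records, phone numbers, and the product code "P-100" below are all invented; a real system would draw them from each merchant's uploaded inventory.

```python
# Invented merchant records standing in for uploaded inventory files.
stores = [
    {"name": "Ace Hardware", "phone": "555-0101", "stock": {"P-100": 8.99}},
    {"name": "Valu Mart",    "phone": "555-0102", "stock": {"P-100": 7.49}},
    {"name": "Corner Shop",  "phone": "555-0103", "stock": {}},
]

def outlets_by_price(stores, product_code):
    """Return (price, name, phone) for every store stocking the
    product, in ascending order of price."""
    hits = [(s["stock"][product_code], s["name"], s["phone"])
            for s in stores if product_code in s["stock"]]
    return sorted(hits)

listing = outlets_by_price(stores, "P-100")
```

A geographic-range condition would simply add one more filter before the sort, which is why the same mechanism can also rank merchants by delivery convenience.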
Another aspect of shopping is delivery of a product. How far the customer must travel to view or take possession of products becomes a factor in purchasing decisions. The computer can help to minimize this effort. Computer-based geographic information systems can relate two points on a map to a network of roads and calculate expected travel times between them. The computer can infer the customer’s home location and the vendor’s business location from their respective telephone numbers and then calculate the travel time between the two from a file of geographical information. So customers can also do comparison shopping on the basis of delivery convenience. Once purchase transactions are entered into a computer, the information becomes available for many purposes.
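The travel-time calculation over a road network is a classic shortest-path problem; Dijkstra's algorithm is the standard method. The toy network below, with intersections as nodes and minutes of driving as edge weights, is invented for illustration.

```python
import heapq

def travel_time(roads, start, goal):
    """Dijkstra's algorithm over a dict-of-dicts road network;
    returns minutes from start to goal, or None if unreachable."""
    queue, seen = [(0, start)], set()
    while queue:
        minutes, node = heapq.heappop(queue)
        if node == goal:
            return minutes
        if node in seen:
            continue
        seen.add(node)
        for nxt, cost in roads.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (minutes + cost, nxt))
    return None

# Invented network: the direct-looking route via A takes 12 minutes,
# while the route via B takes only 11.
roads = {"home": {"A": 5, "B": 9}, "A": {"store": 7}, "B": {"store": 2}}
minutes = travel_time(roads, "home", "store")
```

Running the same query against each candidate vendor's location is what would let the computer rank stores by delivery convenience as well as by price.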
Ultimately, computers may eliminate the need for individual customers to travel far from their homes to obtain merchandise. The “Streamline” model of shopping for groceries in Boston suburbs suggests a distribution method which could be applied to many other kinds of products. Instead of traveling to a store to view merchandise, place orders, and take possession of these products, customers might obtain information about products on the Internet and order the products online. The seller might arrange for delivery of products to the customer through a network of neighborhood depots. Two or three times a day, vendors might deliver ordered merchandise to trucks or vans which would make the rounds of urban neighborhoods, dropping the products off at the depots where customers could pick them up at their convenience. Besides improving transportation efficiency, this arrangement would eliminate the need for stores to carry a large inventory of goods for display purposes. The whole apparatus of contemporary retailing, including stores and shopping malls, would be called into question by this new method of displaying and delivering consumer products.
At stake here is nothing less than erosion of the connection between mass entertainment and sale of commercial products which underpins CivIV. If the customer knows how to obtain online the best products at the lowest possible price, why on earth would anyone follow some other method of making product decisions? Why base purchasing decisions on impressionistic commercials about a wide range of consumer products when persons prepared to buy a certain product can go straight to the source of its information and find out exactly what they need to know? Letting television entertainment determine one’s choice of consumer products makes sense only when the customer has no other place to turn. To the extent that people do their shopping on the Internet, the market will be forced to deliver a more suitable product at a more favorable price because customers will be able to compare many alternatives. Purchasing decisions can then be made free of the hoopla and hype that has characterized modern merchandising. The excessive, wasteful purchasing that arises from ignorance of product or price due to the free market’s huge knowledge requirement can be kept to a minimum.
The average American living in a city with a population of one million persons must pick and choose between as many as one million different commercial products. In the face of this awesome variety, the customer must somehow find products to satisfy each want or need. Television commercials, which occupy roughly 18% of total broadcast time, push products at people who happen to be paying attention to something else. This is, at least, one way of becoming informed about consumer products. If a better way comes along, people may still watch commercial television for the sake of the free entertainment but will turn to the more sensible alternative when it comes time to buy something. Prospective advertisers, recognizing the reduced effectiveness of the commercials, will become less interested in underwriting the cost of television programs. Entertainment will then have to find some other means of financial support.
Computerized Teaching
There is no reason why every mentally capable person in the industrialized world should not receive a high-quality education at a reasonable price. Cost should be no object. The knowledge itself is expertly presented in books. For at most several hundred dollars a year, one could own all the books that one could comfortably read and digest in any field of interest. Books can be recycled or borrowed for free from public libraries. Of course, most people need reinforcement of visual learning through classroom presentations and spoken discussions. For this part of the lesson, students could listen to audiotapes. Someone could read books aloud or retrace their themes in a recorded discussion. Better still, skilled teachers could perform on videotape. The tapes could record classroom scenes. Someone sitting in front of a large-screen, high-definition television set hooked up to a VCR would receive as much instructional experience from watching a videotape as that which could be had from attending a class. The only thing missing would be the opportunity for the student to ask the teacher questions or to be questioned. The fear of being caught unprepared would also be missing.
The element of interactivity, missing in books and videotapes, is among the computer’s capabilities. This electronic machine permits two-way communication between teacher and learner. Therefore, the gap between an actual teaching experience and a taped replay can be at least partially plugged. Once a successful teaching routine is captured in a computer memory, the performance supplied by a human teacher can give way to an automated procedure. Machine-based instruction can be inexpensively reproduced and distributed to students. However, the interactive function of teaching is not well handled by machines in the current state of their technology. Especially for younger children who must be taught by spoken words and personal example, real-life teachers are better instructors than electronic gadgets with mechanical interfaces and screens. The gap may narrow as new machines are developed with improved visual capabilities and voice-recognition features. Once computer software reaches the point of permitting an intelligent conversation in the English language, the two modes of teaching may appreciably converge.
Of course, we are not yet there. The idea of parking seven-year-olds in front of personal computers where they spend hours staring at a small screen and typing in answers to clunky questions is justifiably disturbing to many people. The average computer screen presents a grainy, two-dimensional image that permits thirty degrees of vertical and forty degrees of horizontal vision. That compares with grainless, three-dimensional images seen in a normal range of 155 vertical and 185 horizontal degrees in real life. The student’s interface with the computer might be a keyboard, mouse, button, knob, ball, joystick, wheel, or touch screen. Yet, technological advances in large-screen and high-resolution television will spill over into the area of computer monitors and screens. Computers which recognize spoken words will be improved and become more widely used. The technical barriers to simulating a lifelike classroom situation will, in time, be greatly reduced. Virtual-reality features will become available. A final ingredient needed to bridge the gap will be the creative vision to discover new ways of exploiting the teaching potential of computer-based technologies, just as D.W. Griffith once pioneered film techniques. Artist-educators of the future will surely rise to the challenge of creating an experience which is both educational and personally stimulating.
The education establishment has begun to recognize that educators who know how to teach are as valuable as skilled researchers. Larry Rudnick, an astronomy professor at the University of Minnesota, has been recognized as an effective teacher. His approach to teaching, he said, rests on a “willingness to learn, to listen to students and to change,” recognizing that different people learn in different ways. In the middle of the semester, Prof. Rudnick surveys students about his course and makes mid-course adjustments. He admits to having stolen good teaching ideas from others. His classroom performance mixes lecturing, blackboard diagrams, slide presentations, and audiotapes. The question is whether the expertise which a human teacher has acquired from classroom experience can be transferred to a computer’s memory. Clearly, those types of electronic media which support the entertainment culture are insufficiently flexible to provide the tailored instruction that Prof. Rudnick and other outstanding teachers have put into their courses. Multimedia computer software does have that capability.
If teaching is to be perfected, its product must be one that can be multiplied and expanded to reach larger audiences. Then resources will become available to polish its expressions to a high degree. Then we can see, on a broad scale, what type of lesson works. The need for standardized, high-quality lessons dictates that the thrust of education move beyond the human teacher to a machine-based system. The skills of an individual teacher perish when that person leaves, but teaching routines which can be stored in a computer’s memory last indefinitely. The most important educational lesson is to teach children how to speak. Since parents are typically the teachers, it is critical that society allow them enough time away from work for verbal interaction regularly to take place within a family setting. The company of other children also stimulates learning by example. At higher levels of education, the computer can draw upon its limitless fund of knowledge to enrich the teaching experience. Pimentel and Teixeira have written: “Someday, teachers will be able to take students to the bottom of the ocean without leaving their classroom. Students will play with atoms and make their own molecules in VR (virtual reality) to experience chemistry, instead of just reading about it.”
Conventional education is labor-intensive. A live teacher delivers a lecture to perhaps thirty students in a classroom, asks questions to see what the students have learned, and gives answers to the questions. Periodic tests tell how well the lesson has been remembered. To the extent that computer-based systems replicate the classroom experience, education is no longer limited by class size or by a teacher’s capacity to deal individually with a certain number of students. All students can have access to the very best teachers and receive frequent feedback from them. Present technology requires students to type in their responses to questions or click on items appearing in a list or menu. Because computers can store information about the user’s key strokes, they can monitor responses to questions and assess how well the students have learned their lessons. Testing can take place whenever students are engaged in a learning exercise, not just during official “tests”. All this has the potential to revolutionize the teaching process.
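The kind of continuous assessment described above is simple to sketch. In the toy example below (the lesson content and the exact-match scoring rule are illustrative assumptions, not an actual courseware design), a program quizzes a student, logs every response, and computes a running score:

```python
# Minimal sketch of computer-based drill: ask, record, and score.
# The questions and the scoring rule are invented for illustration.

def run_lesson(lesson, answers):
    """Quiz the student, log each response, and return a score report."""
    log = []                          # every response is retained for review
    correct = 0
    for (question, expected), given in zip(lesson, answers):
        is_right = given.strip().lower() == expected.lower()
        log.append((question, given, is_right))
        correct += is_right
    return {"score": correct / len(lesson), "log": log}

lesson = [("What planet is third from the sun?", "Earth"),
          ("What is 7 x 8?", "56")]
report = run_lesson(lesson, ["earth", "54"])
print(report["score"])                # 0.5
```

Because every response is logged, assessment can piggyback on ordinary practice rather than waiting for an official test.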
Knowledge Alienated from Workers
Today’s paradigm holds that extensive education is needed to handle the intellectually more challenging kinds of work found in a technologically advanced society. Knowledge-based occupations require more education and training. That may not necessarily be the case. That an occupation involves a large component of knowledge does not mean that a human being has to provide it. Some of the knowledge can be supplied by computers. At one time, manual labor was an element in most occupations. Power tools and equipment, which supplement or replace muscular exertions, have changed the nature of blue-collar work. Henry Ford’s assembly line showed that in large-scale production it was helpful to coordinate the flow of parts to workers who would install them on products. The parts arrived on overhead conveyor belts which dropped them onto the product as needed. Information and knowledge are today like those parts which were delivered to the assembly line. Computers are like the power tools. Employers can improve the efficiency of white-collar work by rationalizing the operation, identifying and defining the knowledge that is needed at each point, and arranging for its delivery to a worker just in time to be used.
Data-processing systems lay out the knowledge requirements for calculation by a computer. Flow charts show how one piece of information connects with another. The computer stupidly follows this scheme and the task is done. The next step, then, is to bring the human being into the flow of information processing. While doing a job, a worker knows what to do at each point in the work routine. This is knowledge become habit, born of long practice. The knowledge-laden techniques could be codified in some way. Theoretically, each person’s work could be analyzed and arranged in logical steps like information in flow charts. Computers could store the required knowledge and spit it out at appropriate times, reminding an employee of what to do next. Computer output could provide complete information needed to handle the work at each stage. In that case, even an inexperienced person could do complex work with reasonable competence. Knowledge-intensive managerial or professional work may require a bit more preparation, but even this can be formulated in a work routine. Each job carries a spectral shadow, which is the knowledge supporting its performance. The computer can hold this pattern of work performance in its memory as an intangible asset, drawing upon it as needed.
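The scheme can be made concrete with a toy routine. Here a job is stored as an ordered list of steps, each paired with the knowledge needed at that point, and the program simply surfaces the right instruction at the right moment. The steps and their wording are invented for illustration:

```python
# Sketch of just-in-time knowledge delivery: the stored routine, not the
# worker, carries the know-how. All steps below are hypothetical.

ROUTINE = [
    ("verify order", "Check the order number against the customer record."),
    ("pick parts", "Bin codes are listed on the packing slip, column 3."),
    ("assemble", "Torque the housing screws to 12 Nm, in a cross pattern."),
    ("ship", "Use carrier code GND unless the order is marked RUSH."),
]

def next_instruction(step_index):
    """Return the knowledge the worker needs at this point in the routine."""
    task, knowledge = ROUTINE[step_index]
    return f"Step {step_index + 1} ({task}): {knowledge}"

print(next_instruction(2))
```

An inexperienced worker need only read each prompt as it appears; the "spectral shadow" of the job lives in the data structure rather than in anyone’s head.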
If offices of the future deliver just-in-time knowledge to employees, it means that the people who work there are not required to bring the knowledge with them to their job. It means that, except for basic literacy and math skills, prior education is largely unneeded. Workers would not have to remember an entire work routine, but only enough to handle the part which the computer has brought to their attention. No doubt, higher-level work positions demand a breadth of knowledge and flexibility of thinking that do not lend themselves so well to this kind of treatment. Intellectually creative functions involve unprogrammable experimentation and intuitive solutions. Yet, though a certain part of work performance must be left to human experience, more job skills can be reduced to discrete points of knowledge than their possessors may want to admit. Besides self-interest and pride, an obstacle to computerization is that busy, important people generally do not have the time to sit down with a scribe and recall everything that they do. That would take too much self-conscious reflection. The trick is to record the knowledge of work in some effortless way while a person is doing it. Those who work with computers, for instance, leave a record of keystrokes which can be automatically retrieved.
Much of the knowledge actually required in careers is of a utilitarian variety. No one needs to know Einstein’s theory of relativity or the themes of Shakespearean plays. Instead, there is a need to understand the comparative merits and technical specifications of particular commercial products or systems. Programmer analysts are hired not because they know the principles of computers as such but because they know UNIX, HTML, Windows 95, Java, or another type of software. This is a relatively unglamorous type of knowledge. Because commercial systems come and go, frontloading the knowledge into a student’s mind does not make sense. While there is no question that knowledge is an essential element in performing work and that employers must pay the going rate to obtain it, it does not follow that this knowledge resides only in certain specially trained or experienced persons. The technology exists to put knowledge in a form that can be accessed by persons of mechanical talents, permitting a quick transfer. Temporarily, certain individuals will have this knowledge ready to go. In the long run, however, no one can claim a personal distinction on the basis of possessing knowledge. The knowledge itself is alienable from persons.
If work-related knowledge is pried loose from the persons who developed it, then the worker’s position as a unique knowledge-laden commodity which fetches a certain price on the market is threatened. Who owns the knowledge to do a certain job, the employee or employer? If the employee, it is a possession that may have little use beyond the employer’s enterprise. If the employer, there would seem to be little justification for paying an employee a premium wage for possessing it. As computers are increasingly used in business, more work-related knowledge will be put into a form that can be extracted from individual workers and made accessible to others. Objectified knowledge, which is removed from an employee’s mind, will then seem to belong more to the employer, especially if it was acquired on company time. Possession of this knowledge would be like having use of any company-owned tool.
Some will say that the future looks bleak if working people are stripped of the one possession that gives them hope and security. If denied the ability to advance in a career through education and acquisition of superior knowledge, what do people have left? They have privileges of incumbency. Failing that, they have their humanity held in common with the managers and owners of the machines. They have their political rights which translate into the power to trump legal and business arrangements. Against the trend of contemporary political thinking, the ultimate answer must be to regard business-related knowledge as the common property of humanity. Henry Ford once said: “All so-called private fortunes are nothing less than public reserves. I have noticed that those who work exclusively for money ... do not retain it unless they continue the use of it for the public.” This statement bespeaks the attitude of creative capitalism. Uncreative capitalism, of course, has a different point of view. Here business managers are tight-fisted in dealing with employees and customers but quite generous when it comes to themselves. It is not just business executives but lawyers, doctors, educators, union members, assorted administrators and professionals, and many others who exhibit tendencies of greed.
Organized labor has exerted a check upon abuses of the managerial class. However, U.S. unions are weakened by declining memberships and a failure to raise issues on behalf of the larger community. Government, too, might challenge unfettered business power. Influenced by campaign contributions and free-market ideologies, current political fashions run in another direction. Values-forming institutions such as religion and the communications media have agendas unrelated to this type of concern. Another source of hope might be the educational sector. Unlikely as it might seem, this may be a promising place to turn public opinion around on questions of responsibility. If many who occupy leading positions in our society are socially irresponsible, a certain amount of the blame can be laid at the feet of the people who educated them. The growing disparity of incomes is, in large part, a product of an attitude that people are differentiated from each other by skills imparted through education. The idea that some people have much higher incomes than others because of their prolonged or specialized education or their competitive worth in the free market will be shown to be spurious once the knowledge component is rationalized.
The fact is that progress in computers and other productive technologies could open up a better life for all people. Greater production efficiency and elimination of waste could provide more abundant output while reducing the amount of human labor required to have it. The fact that we have instead chosen longer hours, greater inequality of incomes, and more waste bears witness to the selfishness of a power elite which has diverted the fruits of production improvement to its own use. Many of these people are highly educated. They are persuaded that they deserve disproportionate wealth because of economic merit. Let their claim to advancement by superior knowledge be stripped away and they may recover some humility. For it is not just blue-collar workers who are displaced by labor-saving technologies. If education is advocated primarily as a means of economic self-advancement, then, of course, we will have managers and professionals who loot the public. They may have to do this to pay back their student loans. But if the institution is driven by a spirit of inquiry and love of truth, and if tuitions are cheaper, then its graduates will accept careers that include some element of self-sacrifice. Plato’s thought that students of philosophy will learn to love sublime ideas brings education back to a sound footing.
Rethinking College
Learning has never been the entire purpose of education. Social mobility has been a factor as well. The social historian, J.C. Furnas, wrote of colleges in 19th century America: “As colleges and universities fanned out toward the Mississippi, the prestige a boy acquired from having been to college came to outweigh considerations of what he might have learned there. To have been able to send him there was the outward and visible sign of economic arrival.” An upwardly mobile American might gain a certain satisfaction in sending his son to attend school with the sons of the Cabots, Vanderbilts, and Saltonstalls. Soon enough, the idea caught on that the same opportunities ought to be extended to poor but meritorious students. Soon there was a push to make a college education available to everyone. However, if social mobility is the purpose of attending college, a system of universal education is self-defeating. Not everyone in society can be upwardly mobile.
The computer, an information-crunching machine, has an infinite capacity to deliver knowledge. To the extent that computerized lessons replace live experiences in the classroom, then high-quality education is no longer limited by class size or a teacher’s ability to deal individually with students. There are, then, no schools that are better than the rest. Suddenly outstanding teachers become available in every field. Like any commodity, education becomes cheap when supply exceeds demand. Computers, fulfilling their potential, have the capacity to deliver an unlimited supply of superb education. It is only through scarcity that education becomes expensive. As a rule, the capitalist system is able to meet marketplace demand for any machine-built product. So it will be possible to increase the quantity of education to meet any level of demand. That means that, in the coming age of computers, everyone who wants it can have the same brand of high-quality education. No student need ever be rejected for admission to this type of college. That being the case, the fact that someone has attended a particular institution of learning ought not to confer any competitive advantage. Education ceases to be a factor in social stratification.
Of course, some learning is required for young people to start careers. Schools will have to deliver this product in a way which can be measured and verified. Beyond that, the idea that successful completion of a four-year program of study or of a particular package of professional training is needed for success in careers is largely a myth. If it were true that an academic degree indicates mental proficiency or the “stick-to-itiveness” needed to compete in today’s complex economic environment, then how could a college drop-out like Bill Gates go on to become the world’s richest man before the age of 40 in founding and managing a firm on the cutting edge of technology? There are just too many examples like this of persons with unimpressive academic records who later amass fortunes, make important inventions, or become effective managers in complex technical fields. In the face of them, educators cannot plausibly argue that their particular service is needed to prepare for successful performance in a career.
The more that machines can produce a high-quality educational experience, the easier it will become to transfer work-related knowledge to previously unskilled individuals. The training function will become relatively cheap. If a large number of persons are trained in the same work function, the supply of persons able to handle the function increases and, all else being equal, wages drop. That has implications for schools which have sold themselves on the basis of bestowing higher lifetime earnings upon students in exchange for acquiring an academic degree. If graduates of expensive colleges find themselves competing on an equal footing with persons who have had a cheap computer-based education, then these more expensive institutions may fail to attract the desired number of students and may themselves need to shift into a cost-cutting mode.
It costs students an average of $20,000 per year in tuition, room, and board to attend a private, four-year college in the United States, and $8,000 a year for a public four-year college. Harvard charges $31,000. College tuition costs have increased twice as fast as inflation during the past two decades. The cost of higher education includes not only the direct outlay for tuition, room, and board (minus financial aid received) but also the lost opportunity to earn income during those four years and to start a career at a time of energetic, impressionable youth. In cold financial terms, the $80,000 that a student, parents, or other benefactor puts towards education at a private four-year college could provide a comfortable endowment for purchasing a house, starting a business, or weathering the inevitable job changes during a career. On the benefit side, the student who graduates from college has instant credibility with employers. In a highly competitive job environment, academic degrees may be demanded for any career position with growth potential. All too often, however, one hears of college graduates who cannot find suitable jobs despite the investment which they made in their educations. Persons with Ph.D.’s are driving taxi cabs or working as file clerks. The education process comes, of course, with no guarantees.
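The arithmetic behind these figures is easily tallied. A rough sketch, using the numbers cited above, ignoring financial aid and inflation, and assuming a hypothetical $18,000-a-year forgone starting wage (the text gives no figure for this):

```python
# Rough four-year cost comparison using the figures cited in the text.
private_per_year = 20_000      # tuition, room, and board, private college
public_per_year = 8_000        # same, public college
years = 4

direct_private = private_per_year * years   # the $80,000 cited above
direct_public = public_per_year * years

# Opportunity cost is an assumption: a hypothetical $18,000 starting wage.
forgone_wages = 18_000 * years

print(direct_private)                  # 80000
print(direct_private + forgone_wages)  # 152000
```

Even before forgone wages, the gap between the private and public totals would itself fund a sizable down payment on a house.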
A growing segment of college-age youth in the United States, especially young men, has chosen not to pursue a college education but instead take advantage of the immediate opportunities for high-paying employment that exist in today’s booming economy. The women, in contrast, have continued to move in lockstep toward obtaining an academic degree. As a result, 57 percent of American students who earned bachelor’s degrees in 1999 were women, compared with 43 percent in 1970 and 24 percent in 1950. Young males are increasingly attracted to technical jobs which can be entered after a quick period of training yet offer a good starting salary. When some computer positions pay as much as $75,000 or $100,000 after a few years on the job, this becomes an attractive alternative to spending the same time in pursuit of an academic degree. Traditionalists say the males are being shortsighted and predict that the advantage of continuing one’s education, in terms of higher lifetime earnings, will eventually become clear. On the other hand, we could be in the midst of a paradigm shift in which traditional types of education lose both their luster and ability to deliver higher incomes.
The Idea of a University
Predictably, colleges and universities will need to reinvent themselves to keep their place in society. They will have to try harder to appeal both to students and employers. For students, who are their immediate customers, they can offer the following improvements: First, they can reduce the cost of the educational process by applying computer and other technology to a greater extent and by setting fees and tuitions at a fair mark-up over cost. Second, they can offer teaching in a variety of sizes and shapes, as suits the students’ individual needs, and confer credit accordingly. Third, they can assume continuing responsibility for their graduates’ economic well-being. They can assume the additional function of representing their graduates to employers. This could take the more limited form of becoming like an employment agency which finds jobs for its clients and, once they are hired, leaves them in another’s charge. More ambitiously, it could take the form of becoming the graduates’ nominal employer, like a temp agency. The university might assign people to work at various businesses but retain a loose supervisory role, especially as regards training and personnel functions.
For business firms, such universities would offer several advantages: First, the more standardized curricula and evaluation procedures would make it safer for employers to hire their graduates. The schooling would be a known quantity. Second, the reduced cost of the education would ease expectations of starting salaries. Third, the universities would have a closer working relationship with employers. Having educated these graduates, they would have more intimate knowledge of an individual’s educational record. That would allow them to make better referrals to positions and assure employers of obtaining more honest and accurate information. Fourth, as a temp agency, they could provide workers to businesses on a basis which is relatively free of risk. If a particular worker did not perform adequately, the university would take him back and furnish a replacement. The discharged person could go back to school for additional training or counseling and be made ready for a new assignment. Fifth, they could free businesses of handling functions such as payrolls and benefits. They could provide specialized career training.
In some respects, such a university might also assume the function of a labor union. While forswearing strikes and other contentious actions, the educational association might represent its members in their contractual relationships with employers and in formulating career plans. It might become a political force to change attitudes and promote the betterment of working people. The association might not object if employers wished to hire their graduates as employees to lock them into a more permanent relationship. Having employment responsibilities, this university would be in close touch with employers and thus be able to design courses which meet real occupational needs. The emerging practice of lifelong learning and career change would be handled with ease. Pension and vacation benefits, accrued by years of service, would become portable. Even so, this would not be a trade school or an extension of corporate training but a university. Its purpose would be to give status, place, and security to men and women currently set adrift in a sea of shifting employment requirements.
An educational institution has several functions. First and foremost would be the teaching of basic skills. Reading, writing, and arithmetic - the 3 Rs - comprise the core of skills to be taught at the elementary-school level. Such additional skills as typing, use of computers and calculating machines, personal grooming, diet and health, athletics, and public speaking are also appropriate subjects to be taught in schools. A second function, which tends to be reserved for higher levels of schooling, has to do with transmitting the community’s cultural heritage. Subjects which fall into this category include history, literature, languages, science, music and art, philosophy, and law. College curricula have traditionally focused upon them. Beyond this, there is an area of learning which has to do with the soul of the institution and its individual students. It would include subjects related to personal values. Religion, ethics, and social consciousness would fall into this category. Education generally instills an appreciation for truth. Persons who are sincere and accomplished seekers of truth find an honorable place within this community. Financial endowments are established to support their labors. Hierarchies distinguish the truthseekers’ accomplishments at various levels.
Beyond this, there are values of a more personal nature. The guiding principle of a values curriculum is that life’s most important values are self-chosen. No educator “educates” students in what they should believe or esteem. On the other hand, a common value of any community must be to respect the boundaries of others. There should be a consensus to respect other people’s opinions even while perhaps disagreeing with them. Another value might be to encourage each person to find a basis of self-pride. It would encourage individuals to take pride in their race, gender, religion, nationality, and social background, whatever it might be, and would give them resources to deepen their understanding of such matters. Religious teachings, if presented in an unobtrusive and respectful manner, would find acceptance in this type of pluralistic community. Values curricula should teach how to fight fairly, how to be polite, how to assert one’s own legitimate interest but give way to others’ legitimate concerns. It is not that people should be taught not to be selfish but that they should learn to recognize their own selfishness and accept certain limits.
The Quest of Self-Definition
Education in the computer age can take a cue from Socrates’ instruction: Know thyself. People today are quite interested in knowing who they are. That desire is manifested in the popular interest in astrological signs which are said to be associated with certain character traits. Knowledge of personal self-identity is an object of psychological testing. Such knowledge presumably helps people to pick suitable roles in real life. Direct marketers analyze customer data to develop profiles of persons likely to buy certain products. The police identify criminal suspects through profiles of persons likely to commit crimes. Each of us is on somebody’s list as exhibiting a type of personality that fits in with some exploitative purpose. But people are also interested in simply knowing who they are. If one has a reasonable self-pride, there is a joy in looking into the mirror and seeing one’s own reflection.
Human personality is such a fragile construct that abstractions purporting to describe it hardly belong to the realm of serious science. Psychological testing may involve the use of concepts that seem stupid or contrived. Methods used to compile test scores may themselves dictate the result. Moreover, persons taking the psychological tests may not have a clear idea of their own position with respect to some of the questions; and the quality of information obtained from such testing cannot rise above what was contributed. It may be that, in the future, psychological tests will need to involve more active participation on the part of the people being tested. The test subjects will need to search their hearts and minds to decide what type of information best indicates their own personality. It may be, for instance, that they know they like a particular song, or identify quite strongly with a particular historical figure, or are interested in a hobby. Whatever resonates most clearly within them can be a guide to constructing self-portraits.
Education can facilitate definitions of personal self-identity. With the aid of computers and other tools, it can bring together groups of like-minded individuals to explore their self-chosen ways. Any commitment of time to do or learn something pushes a person in a particular direction to become something. Individuals often want to become what they can do well and for which they may have received personal recognition. Tests, contests, and competitions, which celebrate the winners, can build self-esteem. Too often, however, education functions in a rejection mode. The fear of becoming a failure drives children to learn too much too fast. This pressure-cooker education might make sense if successful career work depended on cramming as much knowledge as possible into one’s head, but not if the goal is self-discovery. For that, a person needs a solid background of childhood experience where individuality runs free. If young people are to discover where their true interests lie, they must be allowed room to flounder and make their own mistakes. Because the impetus for self-discovery comes from within, there is something to be said for slack which loosens the reins of direction and lets children be.
The quest of self-identity presents a challenge which can be met by a sampling of experiences or by dramatic role-playing. Some virtual-reality games have prepared children to project themselves imaginatively into roles. They become lifelike characters interacting with one another on the screen. Multimedia computers make it easy to escape to a personal fantasy world which, though artificial, offers real opportunities for learning. Even without the technology, schools should be a place to experience variety before one is forced to narrow down to a specialty. It is said that success in life involves, in baseball parlance, “going with one’s best pitch,” and doing this with some frequency. That makes it all the more important to discover from varied experience what one’s strengths are and where one’s true interests lie.
The Possibility of Catastrophe
It is possible that the fifth civilization might not turn out as this or any prediction would anticipate. The thrust of world history to date has been progress defined in terms of growth in human populations, advancing technologies, and increased wealth and knowledge. That could change if events took an unfortunate turn. For instance, even though the Cold War has ended, humanity could still be obliterated in a nuclear war. More than twenty-five nations are developing nuclear, biological, or chemical weapons and have ways to deliver them. Even if national governments kept the problem under control, crime syndicates or terrorist groups could acquire these weapons. Problems such as overpopulation and environmental pollution also hang over the future. The “greenhouse effect”, a consequence of industrialization, could raise the earth’s temperature to the point of turning habitable areas into desert while submerging others in oceanic waters. Depletion of the ozone layer could expose the human population to dangerous radiation. Raw materials upon which our way of life depends could be significantly depleted.
Technology, beneficial in the short term, has produced some disturbing long-term consequences. Medicine has brought humane relief to sick people, but it has also kept alive many persons who might otherwise have died. Their survival brings the possible birth of offspring with similar characteristics. Nature’s way of flushing out genetic weaknesses within the human population is thereby nullified. Over the long term there would appear to be a race between advancing medical technology and naturally deteriorating health. A related problem is that the frequent treatment of illness with antibiotic drugs may have created new strains of drug-resistant bacteria. Some patients who are prescribed these drugs fail to take the full dose needed to knock out the infection. As a result, the hardiest germs survive and multiply, making the next round of illness even more difficult to treat. A bacterial strain discovered recently in Japan has become resistant to vancomycin, medicine’s drug of last resort. A plague based on any of these hardy microbes would be devastating.
Perverse incentives in our society also work to reduce the level of human intelligence. An advanced education, presumably reserved for the community’s most intelligent persons, serves to keep intelligent young men and women from marrying at an early age. Women postpone marriage and childbearing as society encourages them to become educated and pursue a career. If the more intelligent individuals are systematically held back from contributing to the gene pool while those less intelligent breed freely, average intelligence will decline over time. The same effect is achieved by methods traditionally used to select men for military service. National governments have drafted the healthy men and rejected those with mental or physical problems. Why society would want its healthier specimens to become cannon fodder while sparing the less healthy so that they might become the next generation of parents is incomprehensible.
Foreseeably, the deterioration of human health and intelligence will continue, though perhaps at a pace too slow to have a visible impact. Advances in technological knowledge may soften the impact somewhat. But, as technology becomes more complex, human intelligence would meanwhile be weakening. Civilization may be on a collision course between these two trends. Worldly affairs may become so complicated that the intelligence to solve problems is overwhelmed. That, at least, is how humanity in its natural state might face the future. But humanity is not in a natural state; it has machines to assist in overcoming difficulties which nature has imposed. That is a new and historic factor which affects life on this planet. If, in a pessimistic scenario, human populations are ravaged by thermonuclear radiation or deadly disease, the earth might be inherited by insects, bacteria, or another type of organism. Alternatively, the next dominant species might be a race of intelligent machines.
Once before, the earth experienced massive poisoning by a species inhabiting its surface. About three billion years ago, single-cell plants which had developed a capacity for photosynthesis “polluted” the environment by releasing free oxygen into the air. New forms of animal life evolved which metabolized this oxygen and created the balance of atmospheric gases that we have today. Now, in the last two hundred years, humanity has fast been turning the earth into a waste dump. Either new living species will appear which feed upon this waste, or the earth will pass to intelligent organisms or machines immune to its deadly influence. A possibility to be considered is that computers may lead the way to this new age. Computers may allow man to reinvent himself by altering his own DNA. Alternatively, man may himself become fused with machines.
The Frankenstein Civilization
Medieval Jews living in an oppressive society entertained themselves with thoughts of the “golem”, a creature made of clay to which rabbis had given life by uttering a charm. Rabbi Loew in 16th-century Prague was said to have made one, but he was forced to destroy the creature when it ran wild. Mary Wollstonecraft Shelley, wife of the English poet, wrote a book, Frankenstein, in which a student at a German university who knew the secret of infusing inanimate materials with life created an artificial being. This monster turned on its creator and destroyed him. In 1921, the Czech dramatist Karel Capek wrote a play, R.U.R., about artificial workers who tirelessly performed labor. The term “robot” comes from that play. And so the fifth epoch begins with myths or literary works that imagine the creation of artificial human-like beings. One can call it the “Frankenstein civilization”. The title suggests that man will artificially re-create himself. This could happen in several ways: Man might create a machine version of his own mind. He might re-make his own body through genetic engineering. He might create an artificial environment in which his life functions can take place.
If this epoch has a supreme moment, it will be when humanity first encounters an alien creature whose level of intelligence equals or surpasses its own. Our first thought is that humanity might encounter space aliens who came to earth in flying saucers. That is because our historical imaginations tend to be conservative; we project past experiences upon the future. Once before, at the beginning of the third epoch, Europeans encountered a previously unknown but equally intelligent race of beings. Arawak Indians greeted Columbus and his crew when they first set foot on American soil. However, this event represented merely the reconnection of two human tribes which had been separated for thousands of years. Creatures from outer space, if such exist, would likely be of a different chemical makeup. Their intelligence could be of an entirely different order. The best known evidence that these creatures exist concerns an incident that took place in 1947, when an unidentified metallic object dropped from the skies into a field near Roswell, New Mexico. Since then, numerous people have reported observations of unidentified flying objects (UFOs), flying saucers, and the like. Spokesmen for the U.S. Government have denied knowing anything about this.
Popular interest in space aliens and explorations of outer space remains strong. However, it centers in a “Star Trek” image of space travel which is backward-looking. This is the old model of sailors (acting in this case a bit like Greek philosophers) who travel the cosmic oceans in a large boat, encountering strange peoples along the way. Predictably, the human experience of space travel will be an encounter with the very large and the very slow. Except in the sphere of intra-human communications, events will not happen at a normal pace. Distances measured in light-years are beyond the range of travel in a conventional sense. Before human beings can move even a small fraction of that distance, they may have evolved into some other creature or had meaningful encounters with fast-reproducing microbes. Human life in space will likely be confined to the narrow range of places within the solar system which have low gravity and the raw materials to fashion an environment resembling conditions on earth. Scientists suspect that the planet Mars, several moons of Saturn or Jupiter, and perhaps the earth’s own Moon might be suitable sites to support human colonies.
Like currents in the ocean, space travel will be affected by the gravitational configuration of celestial bodies. The locations of magnetic fields and radiation belts will also be important to human travelers. But the main question may be whether travel is even needed. Most travel is undertaken for the sake of delivering something or learning something. If the travel is intended to gather information about distant places, this can be obtained more cheaply by unmanned space probes than by human exploration. Machines which can endure conditions intolerable to man can communicate information to earth as well as human travelers can. Moreover, they do not mind one-way trips. Increasingly, communication would be taking the place of transportation. The human experience in space mainly would consist of communicating with other human beings. Human travel might involve the transportation of genetic materials for purposes of colonization or survival, to reduce the possibility that our species will become extinct.
It is unlikely that humanity will be able to survive in outer space unless machines mediate between us and nature. Machines, which Daniel Boorstin has called “the fourth kingdom” (after minerals, plants, and animals), will comprise a large part of our future whichever way history turns. Machines are today considered to be tools, subservient to the purposes of their human creator. Intelligent machines could change that model. When we imagine that moment in the fifth historical epoch when humanity meets an equally intelligent being, a scenario other than meeting space aliens is that humanity will have a significant encounter with intelligence born of its own technology. Perhaps, in a small way, that moment arrived in May 1997, when the “Deep Blue” chess-playing computer developed by IBM beat Garry Kasparov, the world’s top-rated human chess player, in a match. Computers have an advantage over human intelligence in the accuracy and speed of performing certain calculations and in their ability to preserve memory and function in a harsh environment. The human brain yet retains a huge advantage in overall processing capacity. However, computer technology is rapidly improving while the brain is stuck in a slow evolutionary drift.
The goal of some computer designers is to create a machine which thinks like a human being. They approach this challenge by simulating processes believed to take place within the brain during thought. Some psychologists believe that the passing of signals between two neurons strengthens the synaptic connection between them. Thoughts, in this view, are patterns of strength across particular connections. Computer engineers have written algorithms to simulate learning according to this process. Computers have plotted neural positions on a map corresponding to the brain’s synaptic connections. The visual patterns bear an eerie resemblance to those observed on the surface of a monkey’s brain as it processes sensory experience. The human brain has on the order of a hundred billion neurons, each of which may connect with a thousand others. It is, then, no easy task to simulate brain activity by keeping track of these connections and manipulating them in various ways.
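The learning rule described above, in which the connection between two neurons strengthens when both are active, is commonly called Hebbian learning, and a toy version of it can be sketched in a few lines of code. The four-neuron “brain”, the activity pattern, and the learning rate below are illustrative inventions, not a model of any particular simulation:

```python
def hebbian_update(weights, activity, rate=0.01):
    """One Hebbian step: strengthen the connection between every pair
    of neurons in proportion to their joint activity."""
    n = len(activity)
    return [[weights[i][j] + rate * activity[i] * activity[j]
             for j in range(n)] for i in range(n)]

# A toy "brain" of 4 neurons, initially unconnected.
w = [[0.0] * 4 for _ in range(4)]

# Repeatedly co-activate neurons 0 and 1 (a recurring sensory pattern).
pattern = [1.0, 1.0, 0.0, 0.0]
for _ in range(100):
    w = hebbian_update(w, pattern)

# The synapse between the co-active pair has grown strong (close to 1.0),
# while every connection involving a silent neuron remains at zero.
```

After a hundred repetitions of the pattern, the link between the two co-active neurons has been reinforced while the rest of the network is untouched, a miniature version of the strengthening process such simulations track across billions of connections.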
Scientists have also played Frankenstein by studying and manipulating the chemical basis of human life. Its essential structure is formed by information stored in the DNA (deoxyribonucleic acid) and RNA (ribonucleic acid) molecules. The DNA molecule contains two intertwining strands of genetic material, each like a string of pearls. The “pearls”, or nucleotides, each consist of a sugar, a phosphate group, and a ringed nitrogenous structure called a “base”. The sequence of nucleotides carries genetic information to direct the production of proteins which create the cells of a living organism. The Human Genome Project is attempting to map the entire sequence of elements in a human DNA molecule. There are about three billion bases strung together in this molecule. A supercomputer is needed to keep track of the enormous quantity of data. Geneticists who have compared the DNA molecules of several different animal species have found similarities which may indicate how particular types of thinking relate to physiological functions. Roughly 70 percent of a typical human gene is identical with its mouse counterpart. Within the human species itself, the genomes of any two individuals differ by only about one part in a thousand.
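The kind of comparison geneticists make can be illustrated with a toy calculation of percent identity between two aligned sequences. The twenty-base strings below are invented for the example; they are not real human or mouse genes:

```python
def percent_identity(seq_a, seq_b):
    """Fraction of positions at which two aligned DNA sequences
    carry the same base (A, C, G, or T)."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return matches / len(seq_a)

# Invented 20-base sequences standing in for a human and a mouse gene.
human = "ATGGCCCTGTGGATGCGCCT"
mouse = "ATGGCCCTGTTGATGCGCTT"
print(round(percent_identity(human, mouse), 2))  # 0.9
```

Real comparisons must first align sequences of different lengths, with gaps for insertions and deletions, before counting matches; the principle of scoring position-by-position identity is the same.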
As more knowledge is obtained about the information encoded in human genes, medical technicians can selectively intervene to alter or remove parts believed responsible for health problems. Though scientists are as yet unable to create new life in a test tube, they have the knowledge to use procedures such as cloning to create living organisms from preexisting genetic materials. Conceivably genetic surgery will some day create an “improved” type of human being or a new species. Humanity, possessing finite knowledge, would then be “playing God”. Already scientific knowledge promises to deliver something like eternal life. Personal immortality of a sort has become available at a modest cost by storing frozen samples of a person’s genetic material with a firm called GeneLink. If one wishes to preserve the genetic blueprint of a dead relative, this firm will instruct funeral directors on procedures to take a sample of flesh, before it is too late, by swabbing the deceased’s mouth. Of course, the personality would not come back to life unless the brain cells were preserved. However, the technology of cryonics, which freezes the corpse so as to permit later revival, claims to offer that.
The fifth civilization will take humanity into deeper and ever more dangerous realms of experience. One cannot now predict whether this civilization will mark the final phase of human existence or bring a further progression toward what will become a sixth and then, perhaps, a seventh or eighth epoch of world history. When the computer develops a mind of its own and, like a rebellious child, begins to go against its parents’ wishes, then we will know that the era of Frankenstein is upon us. The fact that computer software can harbor a self-replicating “virus” suggests that man-made intelligence has taken on the qualities of an independent life form. Already, human beings are dependent on machinelike appliances to augment their sense of hearing, keep their hearts beating in a regular rhythm, and replace missing limbs. Many people need daily injections of psychotropic drugs to maintain emotional stability. Such medical developments suggest that humanity’s future may be to evolve into a man-machine hybrid.
World history will not end unless humanity ends. Progress and struggle will both continue. It may be that a part of the population will be Amish-like abstainers from medical treatment and their descendants may come to be valued as a pure and endangered species. Conflicts may then arise between the “artificial” and “natural” people. Or, it may be that elitist dictators will seize control of computer networks and threaten the mass of human populations in some way. On the other hand, the current trend towards fragmentation of experience may continue and confuse the historical picture. One would long for the heroic simplicity of the old days. At the juncture of man and machine, creative interactions will take place. Governments, religions, commerce, entertainment, and other yet unformed institutions will be around. Social hierarchies will continue to exist.
Speculations on Human Life in Outer Space
Man’s expected emigration into outer space has captured the popular imagination. By now human beings have walked on the surface of the moon and have photographed distant planets close up. We know it is possible for humanity to colonize space and that possibility excites us. For many, the idea of living in space is like a piece of blank paper upon which to project fantasies of an ideal world. In a more practical way, the colonization of space offers assurance of survival for the human species should nuclear catastrophe, pollution, disease, or another deadly peril ravage its earth-bound population. It promises new frontiers to be conquered, new human adventures, and new opportunities for exploiting a limitless territory filled with abundant materials. Who could fail to be moved by such a prospect?
Science fiction writers have responded imaginatively to the challenge of exploring outer space. The television series Star Trek has attracted a cult following. While most space exploration remains in the future, it does seem that some of the fiction writers’ assumptions about life in space may not be realistic. The human colony assembled on the star ship “Enterprise” consists of hyper-rational individuals who are organized in a military-like society and who spout philosophical profundities. This type of culture is not futuristic but hearkens back to Plato. Apart from the darkened, star-filled scene outside the portals, little about life on this space ship reflects conditions of living in outer space as they might reasonably be anticipated.
What are those conditions? The main purpose of life, then as now, will be to perpetuate the human species. Because outer space lacks a natural environment in which earth-formed life can survive, humanity will have to create that environment artificially. An air-filled chamber accessible to water and other necessary materials will have to be manufactured and transported to its permanent location. Unless this apparatus and the materials can be brought up from earth at a great cost in energy, they will have to be created from materials found in space that are available to man. That will require new technologies for converting the materials chemically and physically into a useful form.
Beyond this obvious need, the natural conditions of space will impose certain requirements upon human colonies. Confining ourselves to regions within the solar system, we realize that our natural environment, beyond the space chamber itself, would be characterized by locations in time and space and by velocities of motion relative to the sun, its planets, and their moons. A space station might, for instance, be orbiting one of the planets, it might be stationed on a planet or moon, or it might be traveling orbit-less through space. These represent different conditions with respect to gravity for which different expenditures of energy will be needed for travel. It is obvious that no human colony could be established on the surface of a large planet such as Jupiter because the heat, gaseous pressures, and gravity would immediately crush and disintegrate the space station and its living inhabitants.
As a rule, therefore, human colonies will be limited to places having lower gravity, cooler temperatures, and a thinner atmosphere than the earth. Those conditions must still fall within the limits of human tolerance. Where they fall short of human comfort, technology is expected to make up the difference with artificial gravity, heating, and pressurized air.
At first glance, it would seem that the territorial instinct, common to most animals on earth, will become obsolete as the human species moves into outer space. There is obviously enough “room” in that place to accommodate everyone’s wants and needs. Upon further reflection, however, one realizes that some locations in space are more desirable places to settle than others. Places close to sources of materials convertible into water and other of life’s necessities, which also are safely habitable by man, might remain a focus of territorial struggle. Unless comets and asteroids can be mined for their materials, those places would be restricted to locations on the surface of smaller planets or their moons. The premier residence, predictably, would continue to be the earth’s surface.
Humanity would also be concerned with possibilities for interplanetary travel. Such travel would be easier for human beings residing in places of low gravity where a minimal expenditure of energy would launch the voyage. The particular locations and orbital velocities of space ships relative to those of the planets constitute special opportunities for “fast track” travel comparable to those of oceanic currents on earth.
The basic fact is that, when a moving body swings past a large mass such as a planet, it accelerates on approach and decelerates as it recedes; relative to the planet, it leaves with the same speed at which it arrived. Relative to the sun, however, the craft can gain or lose speed by borrowing momentum from the planet’s orbital motion, and substantial changes in the direction of motion are also possible. Interplanetary travel will therefore be governed by natural timetables for exploiting favorable configurations of the planets, similar to the alignment which allowed Voyager 2 to visit all four outer planets.
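The bookkeeping of such a flyby can be sketched with the idealized head-on slingshot formula, under simplifying assumptions (a perfectly reversed trajectory and illustrative, non-mission speeds): relative to the sun, the departing craft carries its own speed plus twice the planet’s orbital speed.

```python
def slingshot_exit_speed(probe_speed, planet_speed):
    """Idealized head-on gravity assist: in the planet's frame the
    encounter is an elastic 'bounce' (speed in equals speed out), so
    in the sun's frame the probe departs at its own speed plus twice
    the planet's orbital speed."""
    return probe_speed + 2 * planet_speed

# Illustrative numbers, km/s (not figures from any actual mission):
probe = 10.0    # probe's approach speed in the sun's frame
jupiter = 13.1  # Jupiter's orbital speed around the sun
print(slingshot_exit_speed(probe, jupiter))  # roughly 36.2 km/s
```

Real encounters are three-dimensional and rarely head-on, so the actual gain is smaller and depends on the flyby geometry, but the principle of borrowing the planet’s momentum is the same.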
Fundamentally, the rhythm of life in outer space would not be congenial to human nature. Human beings have evolved on earth in accordance with periodic frequencies of activity appropriate to our level of magnitude. For millennia, representatives of our species were hunters and gatherers who had daily interaction with their prey. On earth, we expect things to happen at rates of speed to which we are able to react. In outer space, those spatial and temporal relationships would be greatly changed.
Generally, elevation to a higher level of spatial magnitude brings a corresponding reduction of temporal frequency. So, in inhabiting outer space, we would move into a realm of huge size with vast distances. Our bodies would become completely dwarfed in this environment. Temporally, our lives would be out of sync with the slow-moving events found there. Within the artificial space chamber we would be normally stimulated by interaction with other human beings; but, outside, nothing much would be happening. The human life cycle spans perhaps eighty years, but significant celestial events take much longer. As a result, life in space may seem uneventful and boring.
At this point in time, there is no evidence that intelligent life exists anywhere in the solar system except on earth. If we should encounter an extraterrestrial species more intelligent than ourselves that chose to make contact with us, the encounter might involve grave risks for our species. Therefore, we must assume that man will be left alone with himself and his earthly plant and animal companions in an otherwise inanimate universe. Nevertheless, intra-human contact offers real relief from the boredom that might otherwise be experienced by inhabitants of space colonies.
All across the solar system, we might anticipate that centuries hence there will be colonies in various places created and inhabited by human beings. Each colony, confined to its own space chamber, would develop a distinctive ecosystem based upon the particular varieties of life, including microbes, brought to it from earth. In time, the genetic differences between creatures inhabiting these colonies may become too great to permit safe travel between them. Even so, they might maintain close communication with each other through safe and energy-efficient techniques of radio communication. Thus the human colonies would be in touch with each other culturally though not physically.
Even if the immediate physical circumstances of life in outer space are slow-moving and boring, a space colony would receive ample stimulation from all the other human colonies with which it was in touch, including, hopefully, the mother colony on earth. Each colony would have its own set of historical experiences to share with others. Human beings in each colony would continue to make scientific discoveries which could also be shared. Thus the increase in historical experience and scientific knowledge would itself provide a steady stream of events to keep the inhabitants of the different colonies mentally occupied. The proliferation of human colonies would make possible an enormous variety of experiences and points of view of which all, potentially, might be aware.
The human race would meanwhile be receiving much expanded and improved information about events in the natural universe. Larger and more advanced instruments of astronomical observation, unimpeded by a distorting gaseous atmosphere, would permit more accurate readings of objects or events within the solar system and beyond. Computer processing would “speed up” events so that conclusions might be drawn from fragmentary evidence.
It would no doubt, become a policy question for each colony how much private information should be shared with the other human colonies. Some would adopt an “isolationist” stance, preferring to develop their own pure form of culture, while others would be more open and generous with their neighbors. Some might wish to exploit scientific discoveries to gain a power advantage while others would unselfishly share this knowledge. Humanity as a whole might develop a sense of history recognizing both the common past of all human colonies and their divergent futures.
The time in which we now live might be recognized by all as a unique time of convergence, when all the earth’s people became culturally and genetically unified as inhabitants of the same planet, just before their dispersal into space. Once physically scattered, their natures would inevitably and irretrievably diverge.
In the end, humanity would be left with two prized possessions, its genes and its culture. Both roughly correspond to what the ancient Greek philosophers called “form”. All about the natural universe would be masses of gases and solid debris representing what those philosophers called “material”. For human life to be possible and for its forms to have meaning, such materials would have to be systematically converted into water, oxygen, food, and other necessities. The idea of form stamping its image upon worldly material, conceived in the 5th century B.C., might continue to be a theme of importance to our space-inhabiting descendants.
The above speculation assumes, of course, that the story of space colonization will have a beginning. This has scarcely happened yet. Humanity missed a great opportunity in the 1970s when, after the first expeditions into space and onto the moon, it lost interest in space exploration and let valuable time slip away. Our material treasure was squandered on wars, individualized travel, and other luxuries that extracted precious minerals and consumed limited energy resources. Space-related projects were more concerned with gaining knowledge than with preparing ourselves to live in the inhospitable environment of outer space.
To take the first step of creating a new home for our species, I would propose that humanity establish a permanent colony on the earth’s moon and that the chief occupation of that colony be to build an industrial complex that converts materials found on the lunar surface into materials needed for human habitation. We will need clean air and clean water and a variety of other chemicals. We will need a sustained source of energy that can be obtained in outer space. We will need air-tight compartments in which human beings can live. These must all be built with materials found on the moon or with materials that can easily be transported to that place.
It will be an immense technological challenge to manufacture all these products away from earth with resources found in that extraterrestrial environment. Yet humanity needs to do that, because the clock is ticking on our own foolishness. We need a hedge against human extinction, with our eggs in more than one basket.
A SHORT HISTORY OF CULTURAL TECHNOLOGIES
How Writing Began
Most historians agree that the Sumerian people of ancient Mesopotamia were first to develop the technology of writing. They made that cultural invention during the 4th millennium B.C. Egyptian hieroglyphic writing, though roughly contemporaneous, is believed to have been derived from the Sumerian example because it appeared suddenly in a developed form. The Sumerians and Egyptians used a script based on pictorial symbols, later mixed with phonetic elements. Three other peoples - the pre-Aryan peoples of the Indus Valley, the Chinese, and Mayans of Central America - seem to have developed pictorial and phonetic writing more or less independently. The as yet undeciphered script of the Harappan civilization was used by people in northwest India during the 3rd millennium B.C. The oldest Chinese inscriptions date back to the 14th century B.C. The Mayan people invented their system of writing at some time before the 3rd century A.D. Crude pictures and mnemonic devices such as knotted ropes or notched sticks predate the use of written language. The pictures became simplified and stylized, and then associated with ideas. Then came phonetic associations with speech.
The Sumerians were a commercial people and writing was developed to serve that end. Sumerian merchants and traders needed to record quantities of goods. They used baked clay tokens of a distinctive shape, two to three centimeters in length, to represent quantities of commodities such as grain, livestock, labor, and land. Each token represented both a quantity and type of commodity. For example, a “ban” (6 liters) of wheat required a different kind of token than a “bariga” (36 liters) of wheat, or a ban of barley, or a jar of oil. There were 200 different kinds of tokens in common use. Accountants placed these tokens inside a bowl or pouch. Later, they put the tokens inside sealed clay envelopes to increase security. To be able to tell what was inside, the accountants marked the outside of the envelopes. One marking identified the owner and another represented the tokens that were held inside. Before long, the Sumerian merchants realized that it was unnecessary to place actual tokens inside the envelopes; the external markings were a sufficient record. Having dispensed with hollow envelopes, Sumerian scribes began to use clay tablets laid on their backs as a writing material. This medium took an inscription from pressing a straight-edged stylus made of reed or bone into wet clay before it was baked.
Accountants in the Middle East had been using baked clay tokens for centuries before they made several conceptual changes that transformed this system of commercial recording into written language. The breakthrough came in separating quantities from commodities. The token for a ban of wheat was made to symbolize the number one. The token for a bariga of wheat was made to symbolize the number ten. Abstract numbers were now isolated from the quantity-commodity compounds. The next step was to place the numerical symbol next to a symbol representing another type of commodity. For instance, the token for a bariga of wheat (meaning ten) might be impressed next to a token representing a jar of oil. This combination of symbols could represent either a bariga of wheat plus a jar of oil or ten jars of oil. The Sumerians overcame this confusion by representing the jar of oil with a special symbol that was cut into the clay with a stylus when it was meant to accompany a number. Once symbolic incisions had replaced the baked-clay tokens, it became possible to employ a much larger number of symbols for both numbers and words. Each pictorial symbol represented a numerical or verbal concept.
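The conceptual breakthrough described here, separating abstract numbers from quantity-commodity compound tokens, can be mimicked in a toy data model. The token names follow the text; the encoding itself is a modern illustration, not a reconstruction of Sumerian practice:

```python
# Old system: one opaque token type for each quantity-commodity compound.
compound_tokens = {
    "ban_wheat":    ("wheat", 6),    # a ban (6 liters) of wheat
    "bariga_wheat": ("wheat", 36),   # a bariga (36 liters) of wheat
    "ban_barley":   ("barley", 6),   # a wholly different token
    "jar_oil":      ("oil", 1),      # a jar of oil
}

# New system: abstract numerals stand apart from commodity signs,
# so any number can be paired with any commodity.
NUMERALS = {"one": 1, "ten": 10}

def record(numeral, commodity):
    """A ledger entry in the new style: a number sign impressed
    beside a commodity sign."""
    return (NUMERALS[numeral], commodity)

# "Ten jars of oil" no longer needs a token type of its own:
entry = record("ten", "jar_oil")
```

Where the old system needed a distinct token type for every combination, the new one needs only a handful of numerals plus one sign per commodity, roughly the economy the text describes.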
Originally, the incised symbols were pictograms or ideographic representations of physical objects. The linear image itself presented the shape of the represented object. For example, the symbol for sun might be a circle with a dot in the middle. The hieroglyphic symbol of an eye was two concave horizontal lines with a half circle hanging down from the top - i.e., the drawing of an eye. While pictograms can express natural objects, they are less able to represent abstract concepts, proper names, or parts of speech such as pronouns, conjunctions, and prepositions. The next step, then, was to express such words through association with one or more ideograms that had a natural reference. For example, the picture of an eye shedding tears has been used to express the idea of sorrow. A circle representing the sun can also mean day because each day starts with a sunrise. Sometimes several pictographic signs were combined to create a new ideogram. The Chinese character for “word” is a combination of characters representing the mouth and vapor. The Sumerian symbols of a woman and a mountain used together represented a female slave. That is because slaves in Mesopotamia customarily came from tribal peoples living in the surrounding mountainous region.
Another approach was to associate pictorial symbols with abstract words which had the same spoken sound as a word that could be visually represented. In other words, an ideogram could represent both a word of concrete reference and its homonym. For example, the symbol for the number four (4) might represent the preposition “for” or, perhaps, “fore” as in “foresight”. The reference to syllabic sounds - e.g., the “fore” in “foresight” - offered a means of extending ideographic writing to words that could not be visualized. Sumerian speech contained many polysyllabic words with short syllables found in other words. That condition favored use of a technique known as “rebus writing”. A rebus represents a multi-syllable word with a pictograph for each syllable. For example, the name of a well-known palace, “Buckingham”, contains three syllables: buck, king, and ham. Three pictographs representing a male deer, a monarch, and porcine meat would be its rebus symbol. Another kind of symbol, called a “determinative”, helped to distinguish between words that have the same sound but different meanings. For example, the Sumerians used the same spoken word, “ti”, to mean both an arrow and life. If a V lying on its side (>) represents the arrow and the determinative sign is an apostrophe, the word for life might be written: >’
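The rebus principle described above can be sketched as a simple lookup: each syllable of a polysyllabic word is written with the pictograph of a concrete object whose name shares that sound. The glyph names below are illustrative placeholders drawn from the “Buckingham” example, not attested ancient signs.

```python
# A minimal sketch of rebus writing: one pictograph per syllable,
# chosen by sound rather than by meaning. The sign inventory here
# is hypothetical, echoing the "Buckingham" example in the text.
PICTOGRAPHS = {
    "buck": "[male deer]",
    "king": "[monarch]",
    "ham":  "[porcine meat]",
}

def rebus(syllables):
    """Spell a word by stringing together one pictograph per syllable."""
    return " + ".join(PICTOGRAPHS[s] for s in syllables)

print(rebus(["buck", "king", "ham"]))
# [male deer] + [monarch] + [porcine meat]
```

The point of the technique is that none of the three signs contributes its own meaning; only their spoken sounds matter.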
Most systems of ideographic writing are heavy with homonymic references. Chinese speech consists entirely of monosyllabic words. A syllabic sound can have ten or more different meanings. Spoken Chinese extends its range of meaning through tone and context. The meaning of a word depends upon the musical tone or pitch in the speaker’s voice and its position in sentences. The large number of homonyms in spoken Chinese makes it easy to apply pictorial symbols to abstract words. Nine-tenths of Chinese characters have been created from phonetic associations with the words of spoken language. Often determinatives are added to the ideograms to avoid confusion. Many phonetic symbols found in modern Chinese writing reflect the sounds of long-forgotten speech. This script has changed little in more than two thousand years as spoken dialects have come and gone. Modern Chinese writing, like ancient Sumerian script, represents, in Toynbee’s words, “an illogical and clumsy use of ideograms and phonemes side by side.”
In the case of Sumerian writing, the phonetic base of words was complicated by the fact that the Akkadian conquerors of Sumer grafted their own spoken language upon Sumerian script. While words written in this script meant the same thing in both Akkadian and Sumerian speech, Akkadian speakers could no longer recognize the homonymic associations. Their script included a mixture of ideographic words and words representing Sumerian syllabic symbols. For instance, the cuneiform symbol for mouth was pronounced “ka” in Sumerian and “pum” in Akkadian. When appearing with a determinative, however, this symbol referred to the syllabic sound “ka”. This dual system of writing meant that nearly every sign had several different pronunciations and meanings. To resolve the ambiguities, the Akkadians used determinative signs to indicate classes of objects as well as phonetic values. Eventually they moved towards a type of syllabic writing in which sixty written symbols represented the syllables of all words in Akkadian speech. The syllables each contained a single sound with a particular consonant and vowel mix.
A purely phonetic script disengages pictorial elements from the idea content of words. The written symbols instead stand for sounds in spoken words. This type of script can be either syllabic or alphabetic. With syllabic writing, each symbol represents the sound of a syllable. For example, the word “syllable” itself has three syllables sounding like “sill”, “ah”, “bull”. The three sounds would each be represented by a pictorial symbol. They would be positioned in the same sequence as in the spoken word. The other possibility is alphabetic writing. Here the written symbols represent the pure elements of sounds in speech. These sounds correspond to letters of the alphabet. Syllabic writing represents an intermediate stage between ideographic and alphabetic writing. The Japanese have two syllabic scripts which were adapted from Chinese in the 8th or 9th centuries A.D. One, the kata kana, is used for formal documents and scholarly works. It has about 50 written symbols and may not be strictly syllabic. The other type, hira gana, is found in newspapers and popular literature. There are about 300 symbols in this syllabary, but only 100 in common use. The Hangul script developed for the Korean language, which groups alphabetic letters into syllable blocks, is another example.
Diffusion of Ideographic Writing
Most ancient peoples had a “transitional” script which was in the process of evolving from an ideographic or mixed system into syllabic or alphabetic writing. In addition to ideograms, Egyptian hieroglyphic writing contained phonetic symbols representing the consonant root of words. Because the 24 consonant signs covered the entire range of Egyptian speech, some see this as a prototype of alphabetic writing. From the beginning, the pictorially elaborate hieroglyphic writing was accompanied by a short-hand or cursive script known as hieratic writing which priests used for correspondence. A later version, demotic writing, was developed for popular use. The Minoan society on Crete borrowed its still undeciphered “Linear-A” script from the Egyptian and Sumero-Akkadian civilizations during the 17th century B.C. Mycenaean Greeks who seized Crete around 1450 B.C. developed their “Linear-B” syllabic script in imitation of the Minoan. The Assyrians simplified Sumerian cuneiform script, reducing it to 570 symbols of which 300 were frequently used. The early Persian cuneiform script, influenced by the Aramaic alphabet, consisted of 41 mostly phonetic symbols. Chinese writing, while ideographic, is also a transitional script.
The scribes of Sumero-Akkadian society produced clay tablets recording commercial transactions and other types of messages. Over half a million such tablets have been found. The strokes cut with a stylus were thicker on one end than the other, so that they resembled a triangular sliver or wedge. Scribes imprinted the wedge-shaped or “cuneiform” messages in horizontal rows, moving from left to right. That kind of writing spread from Mesopotamia to neighboring lands whose peoples adapted cuneiform writing to their own spoken language. Sumerian script expressing Akkadian speech became an international language during the 2nd millennium B.C. Even Egyptian pharaohs used it when communicating with rulers of their satellite states in Syria and Palestine. Hammurabi, a Babylonian king who compiled a famous code of laws, simplified this script in the 18th century, B.C. His reign saw important advances in mathematics, astronomy, banking, and other areas. Sumero-Akkadian-Babylonian civilization continued to dominate the cultural and commercial life of the Middle East long after this empire disappeared. Cuneiform writing began to disappear in the 5th century B.C. as the spoken Babylonian language fell into disuse.
The ancient Sumerian script or its Babylonian derivative inspired the written languages of the Hittites, Elamites, Kassites, Assyrians, and other Middle Eastern peoples. While written Chinese shows a certain structural similarity with the Sumerian script, evidence of direct influence in this case is less convincing. There are, for instance, no signs common to the two scripts. Chinese tradition attributes the invention of writing to two “gods”, Ts’ang Chieh and Chü Sung, who were secretaries to Huang-ti, a legendary emperor of the 3rd millennium B.C. Ts’ang Chieh invented a set of diagrams used in divination, called “pa kua”, consisting of three broken or unbroken horizontal lines that represented basic elements of nature. Chü Sung invented a system of knots to aid memory. These two inventions, plus hand gestures, tally-sticks, and ritual symbols, may have developed into the early Chinese characters during the first half of the 2nd millennium B.C. The ta chuan or “great seal” characters appear in a book written in the 9th century B.C. The “small seal” or hsiao chuan characters were introduced by Li Ssu, a minister of the first Ch’in emperor in the 3rd century B.C. The li shu, a simpler script developed then to draft documents related to prisoners, is the prototype for most modern Chinese scripts.
When the Spanish conquistadors entered Mexico in 1519 A.D., they found that the Aztecs had an ideographic script which was used mainly for religious purposes. Archbishop Zumarraga ordered most of the “devilish scrolls” destroyed. Aztec writing was highly pictographic but had some phonetic elements. The Spaniards found in the jungles of the Yucatán peninsula and elsewhere evidence of the still older Mayan civilization, which had flourished in the first half of the 1st millennium A.D. The Mayans, too, had an ideographic script, which, in its use of cartouches, resembles Egyptian hieroglyphics. Intolerant Christian priests again destroyed manuscripts written in this language. Today, only fourteen Aztec and three Mayan manuscripts remain. The writing has been only partially deciphered. What is best understood are the Mayan and Aztec calendars and numerical systems. The Aztecs and Toltecs probably derived their scripts from the Mayans. The origin of the Mayan script is unknown. A superficial comparison of scripts may suggest contact with the ancient civilization of Egypt - as ideographic inscriptions on Easter Island suggest contact with the Indus Valley civilization - but such explanations are speculative.
Linguistic scholars made rapid progress in deciphering ancient scripts during the 19th century. In addition to Egyptian hieroglyphics, the knowledge of several cuneiform scripts was revived. They include the early Persian, neo-Elamite, Babylonian, and Sumerian languages. (It is ironic that the Mayan script remains undeciphered since it continued to be understood until the late 17th century.) The choice of writing medium affects the quantity of ancient documents available. While the Sumerians wrote on clay tablets, Egyptian scribes preferred papyrus, a paper-like material made from the stalks of plants that grew in the Nile delta. A technique of writing on parchment, or the untanned skins of animals, was developed at Pergamon in Turkey. Diviners of the Shang dynasty in China inscribed their prophecies on bones and tortoise shells. Some of the most durable writings are inscriptions in stone monuments. Darius I of Persia ordered a proclamation to be carved in three languages on a stone-faced cliff at Behistun. The Indian emperor Asoka erected more than thirty-five inscribed stone slabs, or stelae, to promote Buddhist teachings. A cache of more than one thousand baked-clay tablets and fragments which are five to six thousand years old has been found at Uruk in southern Iraq.
Alphabetic Writing
We have seen how the technique of expressing words in a visual form progressed from pictograms to ideographic writing including phonetic elements, and then to a syllabic script. Alphabetic writing is the final step in this process. The sounds within the syllables of spoken language are broken down into pure elements. The word “word”, for example, is spelled W-O-R-D. Each successive letter represents a sound heard sequentially when someone pronounces that word. The alphabet itself is a complete listing of the written letters. The Hebrews associated each alphabetic letter with the first sound of a word in their spoken language. The Greeks, from whom the English alphabet is derived, copied the Phoenician and Hebrew system of writing.
There is a faint pictorial reference in this lettering scheme, as Richard Hathaway explains:
“A is alpha from the Hebrew aleph, meaning an ox head
B is beta from beth, the house
C and G are gamma from gimel, the camel
D is delta from daleth, the door
H is eta from kheth, the fence.
I and J are Greek iota from yod, the hand
K is kappa from kaph, the palm of the hand
M is mu from mem, the water
N is nu from nun, the fish
P is pi from pe, the mouth
R is rho from resh, the head
S is sigma from shin, the teeth.”
If you turn capital “A” upside down, you can still see the ox’s head with its horns sticking up, though the eyes and nostrils are omitted.
Both the Hebrews and Greeks also used alphabetic letters as numbers. The first nine letters of the alphabet represented the successive numerals from one to nine. The next nine were the numerals multiplied by ten: 10, 20, 30, 40, etc. This association has given rise to schemes found in the Jewish cabala and elsewhere which attach symbolic significance to the numerical total of letters in certain words, especially proper names. The Book of Revelation declares, for instance, that the number of the beast, a man’s name, will equal six hundred and sixty-six. An occult art of linguistic analysis and interpretation known as gematria studies ancient texts seeking mystical illumination from the numbers associated with words. The Romans also used letters to designate numbers but limited them to the following: I, V, X, L, C, D, and M. The modern scheme of numbers, divorced from alphabetic lettering, came from India via the Moslems. They are known as Arabic numerals.
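The letter-number scheme just described can be put into code. The sketch below applies the Hebrew and Greek principle to the modern English alphabet purely for illustration (the historical systems assigned these values to Hebrew or Greek letters, not to A through Z): the first nine letters count one through nine, the next nine count by tens, and the remainder by hundreds.

```python
# An illustrative sketch of gematria-style letter values, mapped onto
# the English alphabet for convenience. Historically the values
# attached to Hebrew or Greek letters, not these.
import string

def letter_value(ch):
    i = string.ascii_uppercase.index(ch.upper())  # 0 for A, 1 for B, ...
    if i < 9:
        return i + 1              # A-I -> 1 through 9
    elif i < 18:
        return (i - 8) * 10       # J-R -> 10 through 90
    else:
        return (i - 17) * 100     # S-Z -> 100 through 800

def gematria(word):
    """Numerical total of a word's letters, cabala-style."""
    return sum(letter_value(c) for c in word if c.isalpha())

print(gematria("cab"))  # 3 + 1 + 2 = 6
```

Under such a scheme, any proper name reduces to a single number, which is the arithmetic behind speculations like the “six hundred and sixty-six” of Revelation.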
We consider alphabetic writing to be more advanced than ideographic or syllabic writing because it achieves significant economies in the use of symbols to express words. Ideographic scripts require as many different visual symbols as there are words in a dictionary. For syllabic scripts, one might need several hundred symbols for the associated sounds. Alphabetic writing expresses the entire range of spoken language in between 20 and 30 letters. The alphabet allows each written word to be “sounded out” phonetically to discover its meaning. It is easy to learn the relatively small number of associations between letters and sounds. On the other hand, as John Logan has pointed out, there are “hidden lessons” which need to be learned in converting sounds into visual signs, coding and decoding information, and ordering words alphabetically. All things considered, it takes children in North America about as much time to learn to read and write in the English language as it does Chinese students to learn the 1,000 basic characters in their ideographic script. Both sets of students typically begin to study reading when they enter school at the age of five and have achieved literacy skills three years later.
The earliest alphabetic scripts did not run in a consistent direction. Some scripts were written in vertical columns. Some moved along horizontal lines. The Phoenician script was read horizontally from right to left. The Ethiopian and Greek scripts, in contrast, went from left to right. Some peoples’ writing followed the “boustrophedon” pattern, moving from right to left on one line and from left to right on the next. This term is derived from a Greek word which means “turning like oxen in ploughing.” Boustrophedon inscriptions are found on the walls of temples in southern Arabia. A person can read them while walking in one direction and then, at the end of the line, pick up the next line without having to walk back to a starting point. By the mid-11th century B.C., alphabetic writing had become more stable. Most scripts settled on movement in a horizontal direction. Pictorial features gradually disappeared as the lettering became more stylized. Beginning with the Ugaritic alphabet in the 14th century B.C., alphabets appeared with their letters arranged in a fixed order. Our word “alphabet” comes from “alpha” and “beta”, which are the first two letters in the Greek alphabet.
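The boustrophedon layout is easy to sketch as an algorithm: break the text into lines and reverse every second one, so the reading eye turns at the end of each line instead of returning to a fixed margin. This is only a rough model; real inscriptions also mirrored the individual letter forms on the reversed lines.

```python
# A sketch of boustrophedon layout: alternate lines run in opposite
# directions, "as the ox turns in ploughing". Here odd-numbered lines
# are simply reversed; genuine inscriptions mirrored the letters too.
def boustrophedon(text, width):
    lines = [text[i:i + width] for i in range(0, len(text), width)]
    return "\n".join(
        line if n % 2 == 0 else line[::-1]
        for n, line in enumerate(lines)
    )

print(boustrophedon("THEOXTURNSINPLOUGHING", 7))
# THEOXTU
# LPNISNR   <- this line reads right to left
# OUGHING
```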
Spread of Alphabetic Scripts
Alphabetic writing began with scripts invented by Semitic peoples inhabiting Syria, Palestine, and the Sinai peninsula during the 2nd millennium B.C. Documents written in such scripts have been found at the Serabit el Khadem temple in the Sinai and at nearby copper mines which are dated to approximately 1500 B.C. Because certain of the alphabetic letters resemble symbols used by Egyptian scribes, some scholars suspect Egyptian influence. This “proto-Sinaitic” or “proto-Canaanite” writing is believed to be the ancestor of all alphabetic scripts. It followed an acrostic principle by which the sound of the first consonant in a word becomes the sound of the letter itself. For example, the pictorial symbol for dog might represent the letter “D”. Next, symbols representing the other consonants in the word were written in order of their respective sounds. The word “dog” might thus be spelled by placing symbols representing a dog, an owl, and a goat together in sequence, except that the early Semitic alphabets recorded only consonants, so the owl standing for the vowel “o” would be omitted. The proto-Sinaitic or proto-Canaanite alphabets had twenty-two letters for consonant sounds in their languages.
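The acrostic principle can likewise be sketched in code: each consonant is written with the pictograph of an object whose name begins with that sound, and vowels go unwritten. The sign names below are hypothetical illustrations, not attested proto-Sinaitic forms.

```python
# A sketch of acrostic spelling in a consonant-only alphabet:
# each consonant letter is the pictograph of a word beginning with
# that sound; vowels are simply not recorded. Sign names are
# illustrative, not attested ancient signs.
SIGNS = {
    "d": "[dog]",
    "g": "[goat]",
    # no entries for vowels: the early Semitic alphabets omitted them
}

def spell(word):
    """Spell a word acrostically, skipping the unwritten vowels."""
    return "".join(SIGNS[c] for c in word if c in SIGNS)

print(spell("dog"))  # only the consonants d and g are written
```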
The original alphabet, the proto-Canaanite, evolved into the Phoenician and proto-Arabic alphabets around 1300 B.C. The Aramaic alphabet evolved from the same source at a later date. The proto-Arabic alphabet gave rise to scripts used in southern Arabia and Ethiopia. Phoenician writing, which is closely related to early Hebrew, passed its alphabetic system along to the Greeks. The Greeks may have received the Phoenician alphabet around 1050 B.C., although some historians believe that the transfer took place as late as the 8th century B.C. Alphabetic writing was a kind of shorthand suited to the needs of merchants and traders. The mercantile class, more than any other, helped to spread this new technique. Two peoples, the Phoenicians and Aramaeans, were its principal carriers. The Phoenicians were ultimately the source of all alphabetic scripts adopted by nations west of Syria. Those used in Syria and places to the east were based on the Aramaic script.
The Phoenicians were a Semitic people given to commercial navigation. Their principal cities were Tyre and Sidon, on the coast of present-day Lebanon, and Carthage in North Africa. They were the first civilized people to set sail in the Atlantic ocean. The Phoenician script was widely used in the Mediterranean region for more than a millennium. Its derivatives include the scripts used in Phoenicia proper and in colonies on the islands of Cyprus, Sardinia, Malta, and Sicily, as well as in the coastal cities of Marseilles and Carthage. The so-called “Punic” script was used in Carthage until the Romans destroyed this city-state in 146 B.C. Phoenician writing became extinct in the 3rd century A.D. Legend has it that the Greek alphabet was adapted from the Phoenician by Cadmus of Thebes, who lived in Phoenicia for many years. The Greek alphabet had both an eastern and western branch. The classical Greek alphabet, consisting of 24 letters, came from the eastern branch. In 403 B.C., the Ionic script used in Miletus was officially adopted in Athens. The other Greek city-states came around to this version during the next half century. From the western branch emerged the Etruscan and Roman alphabets, and, through them, most of the alphabetic systems associated with European languages.
When the Greeks acquired Phoenician writing, they made a modification to the alphabet which greatly increased its appeal. The Phoenician and other Semitic alphabets had consisted exclusively of consonant letters. Words in those languages were written with the consonant letters (usually three) forming their base. Sometimes an unstressed aspirant consonant, used like a vowel, would be added to resolve ambiguities. The Greeks converted the unstressed Hebrew letters aleph, hey, yod, ayin, and vav into vowels equivalent to a, e, i, o, and u. They also added two new vowels, eta (“a” as in fate) and omega (“o” as in open), and three new consonants found in Greek but not any of the Semitic languages. These were theta (“th”), phi (“ph” or “f”), and psi (“ps” as in lips). The Greek alphabet thus offered a complete selection of sounds spoken in that language, so that words might be written without ambiguity. The Latin alphabet contains most of the Greek letters but shortened their names for convenience. Alpha became “a”, beta “b”, gamma “c”, etc. In addition, the Romans inserted a new letter “g” into the alphabet to replace “z”, and later reintroduced “y” and “z”.
Meanwhile, another family of alphabetic scripts was entering lands to the east. While the Phoenicians traded in ports bordering the Mediterranean sea, their Semitic cousins, the Aramaeans, brought merchandise overland along mideastern caravan routes. The Aramaean people, originally from northern Arabia, had settled in Syria during the 12th century, B.C. and established fortified towns, the most important of which was Damascus. That group of city-states came in conflict with the expanding Assyrian empire. Damascus fell in 732 B.C. To control conquered peoples, the Assyrians had a policy of removing them from their homeland and resettling them elsewhere in the empire. This cruel practice worked to the advantage of Aramaean culture. Aramaeans became the dominant traders within the Assyrian empire. Knowledge of their language spread. Aramaic writing had become the dominant script in the Middle East by the end of the 7th century, B.C. The Assyrians were conquered by the Medes and Babylonians, who were, in turn, conquered by the Persians. So influential was Aramaic writing by this time that it replaced cuneiform writing as the official script of the Achaemenian Persian empire.
Though it had existed since the 10th century B.C., Aramaic writing did not become historically important until after the Aramaean states in Syria ceased to exist. Then its commercial prominence gave it an advantage. Even after Alexander the Great officially replaced it with Greek, Aramaic speech continued to be the vernacular language of most peoples living in the Middle East. Jesus, for instance, spoke this language. The Aramaic alphabet was the parent of several later scripts, including classical Hebrew, Nabataean-Sinaitic-Arabic, Palmyrene, Syriac-Nestorian, Mandaean, and Manichaean. Some were used by oriental Christian churches. The Arabic script, in which the Koran is written, developed from Nabataean writing at the end of the 4th century, A.D. Pahlavi, a Persian script developed in the 2nd century B.C., was used in the Parthian and Sasanian empires. A related alphabet known as Avesta is associated with Zoroastrian sacred literature. Aramaean traders also had contact with India, especially during the Persian occupancy of lands in the Indus Valley. Two Indian scripts of the 1st millennium B.C., Brahmi and Kharoshthi, are derivatives of Aramaic.
As trade follows the flag, so it is said that systems of alphabetic writing follow religions. The Latin alphabet, associated with the Roman Catholic church, is today the most widely used alphabet in the world. The Arabic alphabet, second most widely used, prevails in places where the Islamic religion is dominant. Syriac, an offshoot of Aramaic writing, was the script of Christians at Antioch. It split into two branches after the Council of Ephesus in 431 A.D. The Eastern branch became associated with Nestorian Christianity, and the Western branch with the Egyptian Coptics. The Nestorian script traveled east to India, China, and Central Asia through an active missionary corps, influencing the Sogdian and Uighur alphabets. The Jacobite script, named after a Monophysite Christian bishop, was used in Syria, Egypt, and Abyssinia. There was also a Manichaean alphabet associated with the Manichaean religion. The later split between eastern and western Christianity brought a corresponding split in the use of alphabetic scripts. Those nations which embraced the Greek Orthodox faith also adopted the Cyrillic script. They include Bulgaria, Serbia, Russia, and the Ukraine. On the other hand, the Poles, Czechs, Croats, and Slovenes, who were Roman Catholics, adopted Latin-based scripts.
Modern Hebrew is more closely related to Aramaic writing than to the Hebrew script used in pre-exilic times. Likewise, the writing of the pre-Aryan civilization of the Indus Valley is unrelated to Kharoshthi or Brahmi. The emperor Asoka left over 35 stone inscriptions in these alphabetic scripts, promoting his political and religious (Buddhist) views. Brahmi, which may first have appeared in the 7th or 6th century B.C., was the script used by Brahman priests for writing in the ancient Sanskrit language. After the Mauryan empire disintegrated, this script acquired many regional variations. The Hindu revival beginning in the 1st century B.C. produced a sacred literature in Sanskrit. Buddhist and Jainist documents were written in vernacular languages, or “Prakrit”, especially the Pali dialect. The Gupta dynasty, which existed between the 4th and 6th centuries A.D., coincides with the golden age of Hindu culture. Its written language was a prototype for most Indian scripts, as well as those in Tibet, Ceylon, and other neighboring countries. The North Indian Nagari or Deva-nagari script, developed in the 7th century A.D., is the ancestor of Bengali, Kaithi, and other scripts. The South Indian Kanarese and Telugu scripts date from the 5th and 9th centuries A.D. respectively. The Grantha script of southeast India is the ancestor of Old Javanese and Khmer (Cambodian) writing.
Greek writing is the ancestor of all European alphabetic scripts. Classical Greek, based upon the Ionic alphabet, gave rise to cursive, uncial (large rounded letters), and, later, minuscule scripts in the opening centuries of the Christian era. From Greek uncial writing came two scripts used by Slavic peoples, Glagolitic and Cyrillic, both introduced by St. Cyril in the 9th century A.D. Western Greek writing was a model for the Etruscan and Latin scripts. The Etruscan people who controlled northern Italy between the 8th and 5th centuries B.C. may have acquired an alphabetic script from Greek sources during the 8th century. The Romans developed their Latin alphabet in the following century. It was likely of Etruscan and Greek origin. The Greek colony of Cumae near Naples was a principal transfer point for passing the Greek alphabet to Italian peoples. Latin was, of course, the language of the Roman empire. As such, it spread far and wide. The modern scripts of Europe are adaptations of the Latin alphabet to European languages. To its Latin parent, the English alphabet added the letters J and U during the 17th and 18th centuries A.D., and the letter W during the Middle Ages. U and V were once the same letter, as were I and J. W, with an antecedent in the runic alphabet, is related to U and V.
Printing
Printing may have originated in the Sumerian use of cylinder seals to make impressions in clay. In China, religious pilgrims made ink rubbings of Buddhist texts that were inscribed in stone pillars. By the 6th century, A.D., Chinese engravers had mastered the art of wood-block printing. This involved a process of transferring inked writing from paper to a wood surface and then cutting away the uninked portions to leave the script in relief. To print, the cut wood block was inked and covered with a sheet of paper which was rubbed on the back with a brush. This technology helped to meet a demand for Buddhist and Taoist literature during the T’ang dynasty (618-906 A.D.). In the 11th century, A.D., a Chinese alchemist named Pi Sheng invented a method of printing with movable type. He fastened the type font to a metal plate with an amalgam of glue and clay that was baked to harden the attachment. The reusable font could later be removed by reheating the plate. A Chinese magistrate in the 14th century published a book on the history of technology which used more than 60,000 characters carved from wood. In the early 15th century, a Korean king ordered 100,000 pieces of type to be cast in bronze. Korea became the center of print technology until it spread to Europe later in the century.
It was Europe rather than Asia that launched the print revolution, because European alphabetic writing was better suited to the use of movable type than the ideographic Chinese or syllabic Korean or Japanese scripts. The relatively small number of alphabetic letters made it possible to cast reusable metal type in molds at a low cost. It is believed that Uighur Turks living in a region just west of China brought Asian typographical knowledge to the Moslems who then passed it along to the Europeans. Islamic society also gave Europe another technology which the Chinese had developed: paper manufacturing. Its invention may date to the 2nd century A.D. In 751 A.D., Moslems in Samarkand repelled an attack by Chinese soldiers and took some prisoners. Among them was a group of skilled papermakers. However, the Moslems did not themselves embrace a print culture because their religion would not allow the words of Allah to be reproduced artificially. (The Islamic ban on printing was not lifted until the 19th century.) Paper, which took print better than parchment, may have entered Europe during the 12th century from Moorish Spain or through Italian ports that had active trade relations with the Islamic world. Italy soon became a center of paper manufacturing and related arts.
The abundance of cheap paper fed a growing market for literature produced by manuscript copyists. There was a demand for Bibles, prayer books, and other religious literature. University students had need of scholarly texts produced by the stationarii. Works written in living or vernacular languages catered to popular interests. Dante’s Divine Comedy and Boccaccio’s Decameron pioneered that genre during the 14th century A.D. Approximately ten thousand copyists or scribes were employed in Europe to serve these various markets. Europeans began printing with wood blocks in the late 14th century. Initially, their purpose was to produce the large capital letters which began medieval texts. The engravers then included accompanying religious pictures and short passages of text. As their engraving skills improved, the quality of the lettering increased to the point that the text became more important than the ornamental features. Wood-block printers produced short books called “donats” in the early 15th century. A Dutch printer named Laurens Janzoon, also known as Koster, printed a book of prayers, titled Speculum Humanae Salvationis, in 1428 A.D., using wooden fonts. Printers soon preferred to use lead type for the letters because numerous castings could be produced from the same die and they were more durable than wood.
Historians generally credit the invention of printing in Europe to Johann Gutenberg of Mainz, Germany, who printed a Latin-language Bible using movable type and his own press. Gutenberg, a member of the goldsmiths’ guild in Strasbourg, began experimenting secretly with the new techniques in the 1430s while earning a living from cutting jewels and producing mirrors. However, the prolonged experiments cost money and Gutenberg was forced to borrow from friends and business associates to continue this work. In 1450, he borrowed 800 guilders from Johann Fust, a wealthy financier. He pledged his tools and printing equipment as collateral. Gutenberg completed production of the Mazarin Bible in 1454. Its printing brought together a number of technical innovations including a new kind of mold for casting type, a type-metal alloy, an improved press, and oil-based ink. Fust promptly filed a lawsuit against Gutenberg to recover his money. The court ordered Gutenberg to repay Fust’s loans plus compound interest. While sale of the printed Bibles would have amply covered this amount, Fust was allowed to seize the type for the Bible and a Psalter and some of Gutenberg’s printing equipment. With the help of a son-in-law who had been Gutenberg’s assistant, Fust himself set up shop as a printer.
Despite Fust’s claims to the contrary, Gutenberg belatedly received credit for inventions that launched the age of printing. He may not have been the first to print with movable type, but he did perfect the chief elements needed to make this technology commercially successful. Gutenberg mass-produced reusable lead-alloy type by casting it in a mold from matrices struck with engraved dies. He also developed his own handpress that allowed large sheets of paper to be printed. His printing press, adapted from a wine press, combined a fixed lower plate with an upper surface, or platen, that could be moved up or down by turning a small bar in the worm screw. The individual pieces of type were arranged in lines along a wooden strip and locked in place. After printing, the type was distributed back into the type case. Around 1475, steel dies replaced the bronze or brass dies used to produce the copper matrices. A sliding or rolling bed was introduced to allow the form to be withdrawn and reinked after each sheet was printed. Improvements in the worm screw allowed the platen to be raised and lowered more quickly and evenly. Eventually the wooden presses were replaced by ones made of metal. Rotary cylinders with revolving lines of type replaced stationary presses.
Johann Fust and his family became Europe’s first publishers. Fust sold printed Bibles in Paris at one fifth their normal price, causing panic among professional copyists. By the end of the 15th century, an estimated 20 million copies of 35,000 different books had been printed. Thanks to Gutenberg and his successors, common people could afford to own their own copies of the Bible. Printing presses churned out religious pamphlets that fed controversy between Protestant and Catholic partisans. William Tyndale, an Englishman who had visited Martin Luther in Wittenberg, produced his own English-language translation of the Bible. This offended England’s King Henry VIII. Tyndale was condemned for heresy and put to death. Two years later, Henry issued his own English-language Bible as a means of bolstering his authority after the rupture with Rome. The king put his own name and picture on a front page. A thoughtful group of 17th-century Europeans, weary of religious hatred, began to study the natural world. Pierre Bayle’s learned journal, Nouvelles de la République des Lettres, began publication in 1684. Improved postal services allowed individuals sharing a common interest to engage in regular correspondence. This led to printed newsletters and then to general-interest newspapers.
The great expansion of European printed literature and correspondence among scholarly individuals broke down barriers between religions and nations to create an international “Republic of Letters”. The Dutch humanist Erasmus of Rotterdam was the first to take full advantage of the print technology. In 1516, he published a new Latin-language version of the New Testament based upon his own translation from the Greek. Erasmus is today better known for his witty commentaries. Like Voltaire, he had friends throughout Europe and used his contacts to promote intellectual and religious tolerance. The quickening interest in vernacular languages produced a crop of first-rate national poets such as William Shakespeare and John Milton, rivaling those who wrote in classical languages. Essayists such as Montaigne, dramatists such as Molière, and philosophers such as Descartes and John Locke exploited the possibilities of the print medium. French prose literature became a model for European literary culture during the 17th century. It was crisp and precise, stating its themes in simple sentences rather than tortuous aggregations of subordinate clauses.
Printing became a tool for organizing knowledge in Diderot’s Encyclopedia and Samuel Johnson’s Dictionary of the English Language. It helped to spread new political ideas as expressed in Thomas Paine’s Common Sense, the Declaration of Independence, and Declaration of the Rights of Man. Perhaps the most popular application of this technology in the early days was to produce almanacs for farmers, seamen, and others. These almanacs gave astrologically propitious times for planting crops or heading out to sea. They included other kinds of information in their filler space. Poor Richard’s Almanac is famous for its pithy sayings and advice for successful living. Besides publishing books, the early print shops reproduced government proclamations, ships’ manifests and bills of lading, popular ballads, and weekly newspapers. Printed pamphlets distributed in Germany encouraged people to emigrate to Pennsylvania. Handbills advertising products for sale lured customers into stores from off the street. The Sears catalog, introduced in 1896, was so successful in selling sewing machines and other products that it killed the general store. Many Americans, especially in rural areas, learned to read from this book.
Daily newspapers first appeared in Europe during the 18th century. The first daily newspaper in England, the Daily Courant, began publication in 1702. Noah Webster’s American Minerva, begun in 1793, was New York’s first such publication. Many of the weekly papers were mouthpieces for political parties; however, the future lay with mass-circulation newspapers. Cheap wood-pulp paper began to be used for printing newspapers in the 1860s. Photoengraving, lithography, and stereotype printing made it possible to combine pictures or cartoons with the text, increasing reader interest. To boost circulation, Joseph Pulitzer’s New York World pioneered the use of large-type headlines, sections for comics and sports, and a Sunday supplement. It pitched content to flatter or interest the common man. Violent or sensational events became staples of news reporting. Technologically, power-driven rotary presses which printed on continuous rolls of paper helped to speed newspaper production. Ottmar Mergenthaler invented the Linotype typesetting machine, featuring a keyboard similar to a typewriter’s, which was installed at the New York Tribune in 1886. Teletype printers took stories from the wire services.
Photography
Photography was the first in a series of cultural technologies which conveyed sensuous images rather than words. It began with the camera obscura, a device which projects light through a pinhole to produce an inverted image on the inside surface of a box or darkened room. Giovanni Battista della Porta discussed the concept in a book published in 1558. Johann Heinrich Schulze discovered in 1727 that exposure to sunlight darkens solutions of silver nitrate. In 1802, Sir Humphry Davy and Thomas Wedgwood produced visual “silhouettes” by placing objects on paper soaked in silver nitrate and then exposing it to light. A French chemist, Joseph Nicéphore Niépce, conducted experiments in transferring camera-obscura images to glass coated with silver chloride. In 1816, he printed on paper the world’s first photographic negative. A decade later, he imprinted a positive image on a metal plate. Niépce teamed up with another Frenchman, Louis Daguerre, to perfect this process. After Niépce’s death in 1833, Daguerre developed a method of producing positive images on silver plates. His “daguerreotypes” became a way to make inexpensive portraits.
The principle of black-and-white photography is that light focused upon a plate or paper surface coated with silver bromide leaves a visual pattern reflecting the degree of exposure in various places. A chemical reaction turning the silver-bromide crystals into silver occurs in spots more intensely exposed to light. Unexposed spots on the plate retain the silver-bromide coating. A negative is produced when the unexposed silver bromide is dissolved with sodium thiosulfate, leaving the undissolved silver. The image from this negative projected upon photographic paper produces a positive, in which the dark and light spots of the negative are reversed. To sharpen the image, the camera focuses incoming light upon the coated surface of the film through a lens. A shutter controls how long light is admitted, while an adjustable diaphragm sets the width of the aperture; together, shutter speed and aperture width determine the amount of light allowed to strike the film. Film sensitive to light of various colors produces negatives from which color prints can be made.
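The tonal reversal at the heart of this process can be sketched with a toy numerical model. The function and values below are purely illustrative (brightness on a 0.0 to 1.0 scale), not a chemical simulation:

```python
# Toy model of negative/positive printing: exposed spots turn to dark
# silver, so each printing step complements the tones. Printing through
# the negative complements them again, recovering the original scene.

def expose(scene):
    """Brighter areas deposit more silver and therefore print darker."""
    return [round(1.0 - light, 3) for light in scene]

scene = [0.9, 0.5, 0.1]      # bright, mid, and dark areas of the subject
negative = expose(scene)      # tones reversed: dark where the scene was bright
positive = expose(negative)   # a second reversal restores the original tones

print(negative)
print(positive)
```

Two applications of the same complement operation return the original pattern, which is why a print made from a negative looks like the scene.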
The daguerreotype portrait of the 1840s popularized the new technique of photography. The first such picture made in the United States required a half-hour exposure. Photographic techniques improved in subsequent decades as better lenses and more sensitive coatings were invented and as the wet collodion process was applied to photographic plates. Daguerreotype artists roamed the country on riverboats or in specially equipped cars, making portraits in each town. In the 1860s, Mathew Brady and his assistants photographed scenes from the U.S. Civil War. George Eastman introduced roll film in 1888, which applied a gelatin-based chemical coating to paper. His later substitution of celluloid for the paper backing created film for the motion-picture industry. Color film first appeared in the 1930s. An MIT professor, Harold Edgerton, invented the electronic flash tube in 1938 to replace flash powder and bulbs. Photographic realism overtook the news profession in the 1930s and 1940s as newspapers and magazines increasingly used photographs to illustrate their stories.
The Telegraph
The electric telegraph began the modern age of telecommunications. The French physicist André Marie Ampère first had the idea of sending messages with electricity. His writings inspired an American painter and pioneer of photography, Samuel F.B. Morse, to experiment along those lines. In 1844, Morse gave a practical demonstration of an electric telegraph to members of the U.S. Congress. He sent the message, “What hath God wrought”, from Washington to Baltimore. This message was sent in “Morse code”, in which each letter of the alphabet corresponds to a set of dots and dashes - short or long sounds - produced by closing and opening an electric circuit. The telegraph depends upon an electric circuit in which a single copper wire forms one part of the circuit and the earth the other. A key at the sending end alternately makes and breaks the circuit; an electromagnet in the receiver responds to the resulting pulses of current. Patterns of electrical engagement initiated at one end of the wire are thus received at the other end as audible sounds.
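The letter-to-signal mapping described above can be sketched in a few lines of code. Only a handful of letters are included here, and the spacing convention (spaces between letters, “ / ” between words) is an illustrative assumption rather than part of the historical code:

```python
# A minimal Morse-code encoder: each letter maps to a string of dots
# and dashes, as in the scheme Morse demonstrated in 1844.
MORSE = {
    "A": ".-", "D": "-..", "G": "--.", "H": "....",
    "O": "---", "R": ".-.", "S": "...", "T": "-", "W": ".--",
}

def encode(message):
    """Encode a message; letters separated by spaces, words by ' / '."""
    words = message.upper().split()
    return " / ".join(" ".join(MORSE[ch] for ch in word) for word in words)

print(encode("SOS"))        # ... --- ...
print(encode("WHAT GOD"))   # .-- .... .- - / --. --- -..
```

On a real telegraph line, each dot or dash would correspond to a short or long closing of the sending key.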
Morse’s invention accompanied the development of railroads during the 19th century. The telegraph machine allowed large military operations to be coordinated effectively from headquarters. Later enhancements allowed several different messages to be sent through the wires at the same time. In 1872, J.B. Stearns invented a “duplex” telegraphy system, which allowed two messages to be sent through the same set of wires. Thomas A. Edison, whose career began as a telegraph operator, invented a “quadruplex” system in 1874. Automatic telegraphy became available with the use of punched paper strips. A copper cable capable of carrying telegraphed messages between continents was laid across the North Atlantic ocean in 1866. By 1902, telegraphic cables, primarily owned by the British, crisscrossed most of the earth’s oceans and seas, including the Pacific. Then, suddenly, this wire-based technology became less important as radio communication appeared.
The Telephone
The telegraph, like ideographic writing, was a device for experts who had learned a specialized code. The next cultural invention, the telephone, was like alphabetic writing. Because the messages were delivered in spoken language, it became a means of popular expression. The invention of the telephone is attributed to Alexander Graham Bell, a Scottish-born teacher of deaf children then living in Boston. However, Elisha Gray invented a similar device about the same time. On March 10, 1876, Bell was working on his project in an attic workshop when he spilled sulfuric acid over his clothes. “Mr. Watson, come here, I want you,” he called to his assistant in the basement. Watson heard Bell’s voice coming from the wire. He rushed upstairs with great excitement to deliver the news. Later that year, Bell exhibited what he termed a “talking wire” at the Centennial Exhibition in Philadelphia. Emperor Dom Pedro of Brazil stopped by to see Bell’s exhibit. As Bell spoke into the transmitter, the emperor listened at the other end of the wire. “My God, it speaks!,” the emperor exclaimed. Bell’s invention became the hit of the exhibition.
The telephone operates according to the principle that sound waves emitted by the human voice can produce an electrical current whose impulse patterns express acoustical qualities in the originating speech. Bell’s invention consisted of a diaphragm - a thin plate of soft iron - which vibrated like an ear drum in waves of varying intensity and frequency. These vibrations affected the magnetic field of a nearby bar magnet, which, in turn, induced a current in wire wrapped around the bar. A receiver at the other end picked up the electric signals and converted them back into sound by the reverse process. The current received by this device created a fluctuating magnetic field which caused its diaphragm to vibrate in the same way as the transmitter’s. Thus, the same sound might be heard as was spoken at the other end of the wire. Within a year, Thomas Edison and two other Americans invented an improved transmitter, the carbon microphone, which used loosely packed carbon granules instead of a bar magnet.
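The transmit-and-reverse principle can be illustrated with a toy model: the transmitter turns diaphragm motion into a proportional current, and the receiver applies the inverse conversion. The scale factor and sample values are invented for illustration:

```python
# Toy model of Bell's telephone chain: diaphragm motion -> current ->
# diaphragm motion. Because the receiver reverses the transmitter's
# conversion, the original sound pattern reappears at the far end.
GAIN = 0.01  # illustrative amperes per unit of diaphragm displacement

def transmit(displacements):
    """Diaphragm motion at the speaker's end -> fluctuating line current."""
    return [GAIN * d for d in displacements]

def receive(currents):
    """Fluctuating line current -> diaphragm motion at the listener's end."""
    return [i / GAIN for i in currents]

sound = [0.0, 1.0, 0.5, -0.5, -1.0]   # a sampled sound-pressure pattern
line_current = transmit(sound)
heard = receive(line_current)
print(heard)   # the same pattern as 'sound'
```

The round trip is lossless only in this idealized sketch; real lines attenuated and distorted the signal, which is why the carbon microphone’s stronger output was such an improvement.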
Today, more than three out of four U.S. households have telephones. Switchboard operations are highly automated. The telephone lines carry more than voice signals. Computer data can now be transmitted through these lines to distant computers. Written text can be transmitted between fax machines. The era of the video telephone may be approaching. To meet the greatly increased demand for images and information transmitted by telephone, communications companies have installed several coast-to-coast networks of fiber-optic cable during the past twenty years. Glass fibers carry information more efficiently in the form of light signals than copper wire carries electrical impulses. Moreover, the technique of sending light down each strand of fiber at several closely spaced wavelengths allows the cable to carry signals on many different channels, further increasing its carrying capacity. In addition, frequency bands have been reserved for cellular phones, pagers, and personal communication devices exploiting the new wireless technologies. Individuals can place or take calls nearly anywhere. As telephone service is linked to computers and satellite transmissions, communications experts have suggested that “in the future, all roads lead to the telephone.”
Sound Recording
Thomas Edison, America’s best-known inventor, created the first phonograph machine in 1877. Working with an assistant, Edison sang “Mary had a little lamb” in a loud voice into a rotating cylinder covered with tin foil. The sound of his voice vibrated a diaphragm attached to a needle, which cut quivering grooves in the tin foil. The cut grooves reproduced the original sound when a needle was later drawn across the rotating cylinder. Another inventor, Emile Berliner, brought out an improved version of the phonograph in 1888 which he called a “gramophone”. It played a flat disk with spiraling grooves of uniform depth and lateral variation. Its advantage was that an unlimited number of duplicates could be made from a matrix. Berliner sold his gramophone records from a mail-order catalog. By 1895, he had 100 different disks in the catalog, each with a four-minute recording of music taken from operas or John Philip Sousa marches. The range of frequencies was limited, and the sound quality erratic. This type of record was played on a turntable with a spring-driven motor that needed to be rewound each time. Wooden or steel needles ran in the grooves.
Sound recordings were a popular type of entertainment in the penny arcades of the 1890s. Edison manufactured a coin-operated machine which cost a penny to play. Electric record players offered improved convenience and sound quality. A crystal in the playing arm converted mechanical vibrations from the record grooves into voltages that were fed into an audio amplifier and loudspeaker, recreating the sound. As automatic record changers were developed and recordings improved, increasing numbers of phonograph records were sold to consumers, reflecting the musical interests and styles of the times. The juke box, placed in bars and restaurants, became popular in the 1930s. “Top 40” lists of the most popular recorded songs were showcased on radio stations across the country. The 78 r.p.m. records gave way to 45 r.p.m. disks with single hits on each side, and to longer-playing albums with multiple selections. Recorded music became an integral part of the fast-paced, youth-oriented American lifestyle.
Meanwhile, the technology was changing as more sound recordings were issued on tape. The technology of tape recording began with Valdemar Poulsen’s discovery in 1898 that a steel wire retains part of its magnetic flux when drawn across an induction coil in which electrical impulses from sound vibrations had created a fluctuating magnetic field. Poulsen, a Danish inventor, built a device called a “telegraphone” to capture and replay the magnetized sounds. In the 1930s, the German companies IG Farben and AEG developed magnetic tape which offered better sound quality than wire. American scientists seized a few of their “magnetophones” after World War II and studied the technology. That knowledge was put to use in building tape recorders for sale to commercial radio stations. The consumer market did not take off until the 1960s when tapes became available in the form of cartridges and cassettes. Philips Electronics NV brought out the cassette tape player in 1963. Eight-track players briefly became popular. In recent years, the tape-based technology has given way to compact disks featuring digital recordings.
Motion Pictures
Thomas Edison, who is credited with inventing motion pictures, regarded this technology as an extension of photography. Cinematic film is indeed nothing more than a series of still pictures shown in quick succession to create the illusion of motion. In 1824, Peter Mark Roget, the author of Roget’s Thesaurus, wrote a paper noting that visual impressions from a scene linger after the picture changes. If a number of pictures are shown rapidly one after another, they will seem to blend together in an image of continuous motion. Several photographers experimented with this effect during the 19th century. Eadweard Muybridge and J.D. Isaacs took a series of photographs with electrically controlled shutters which recorded race horses in motion. When these pictures were mounted on a revolving disk, the horses seemed to move. A simpler toy based on the same principle, the zoetrope, had already been popular for decades. Muybridge’s photographic studies of human beings and animals in motion may have been the inspiration for Edison’s experiments with motion pictures, which were likely done by his assistant, William Dickson.
Edison’s “kinetoscope”, invented in 1888, consisted of a large box with a screen inside. Still photographs were attached to a cylinder rotating behind the screen. The viewer looked through a small hole to see moving objects. An early venue for Edison’s invention was the peepshow in penny arcades. His films were also shown during interludes between vaudeville shows. In 1893, Edison developed a new type of machine which used celluloid film. “Kinetoscope parlors”, devoted exclusively to this new medium, were established in Ottawa, New York City, and other cities in the following year. For a nickel, the viewer could watch thirteen seconds of animated entertainment on fifty feet of film. Several inventors found a way to project pictures onto an exterior screen. The Latham family of New York City invented a projection device called the “Eidoloscope” in 1895. Seven months later, Auguste and Louis Lumière showed their first film to Parisian audiences using an improved projector, the “Cinématographe”. Within weeks, the Lumière brothers were drawing 2,500 people a night to this new type of entertainment.
The first films were simple spectacles of motion. “Fred Ott’s sneeze” was the title of a kinetoscope production from Edison’s studio in West Orange, New Jersey. The Lumière brothers’ film showed children horsing around, workers punching out on a time clock, and a train pulling into the station. A single stationary camera recorded outdoor scenes in direct sunlight. Around the turn of the century, film makers began to experiment with the dramatic potential of the new medium. A French director, Georges Méliès, was the first to create motion pictures that followed a story line. His Cinderella and Trip to the Moon employed photographic tricks. In 1903, Edwin S. Porter produced The Great Train Robbery, featuring “Bronco Billy” Anderson. Not only did this production involve more advanced editing techniques, but it also was the first time that an actor became a “star”. In 1908, a group of independent film makers began working in southern California because an association of companies holding patent rights to this technology was attempting to keep unlicensed competitors out of business. The unlicensed operators wanted to be near the Mexican border in case U.S. courts imposed an injunction against them.
The era of silent films produced a rich crop of celebrities including Charlie Chaplin, Mary Pickford, and John Barrymore. By 1908, the roles of actor, director, camera operator, screen writer, and laboratory technician had become separate functions. Film was now shot in lighted studios. Animated cartoons, which had first appeared in 1906, became popular in the following decade. Hand tinting added coloration to films. Experiments done at the time of World War I added sound to the visual component. By converting sound into light beams that could be recorded on the film itself, engineers made it possible to synchronize sight and sound. The first “talking picture” was Warner Brothers’ The Jazz Singer, which opened in New York City in October 1927. Walt Disney’s talking cartoon Steamboat Willie, featuring Mickey Mouse, came out in the following year. Nearly every major studio converted to talking pictures within two years. The 1930s and 1940s were a golden age of Hollywood filmmaking as large studios such as MGM, Warner Brothers, Paramount Pictures, and Universal turned out a steady stream of motion pictures aimed at mass audiences.
As tape recordings and compact disks have replaced the phonograph record, so videotapes have increasingly been used to record visual motion. The first videotape recorders were produced in the 1950s. Ampex Corporation began selling them to television stations in 1956. Consumer videocassette recorders (VCRs) came along in 1976 when Sony introduced its Betamax machine. Including a television set, this device cost over $2,000. As the Betamax and VHS formats competed for dominance, prices came down and videocassette recorders grew in popularity. In 1984, the U.S. Supreme Court decided a lawsuit brought by Disney and Universal against Sony alleging that home taping of television shows infringed upon their copyrights. Its ruling in favor of Sony gave further impetus to this practice and to VCR sales. Having failed to stop videotape recordings, film producers set up departments to distribute copies of their own films. A new industry was created to rent or sell videos to individuals for viewing in their homes.
Radio
The radio is an electronic device which receives audio signals from electromagnetic waves. Commercial radio uses waves in a frequency range between 550 and 1,600 kilocycles per second. To produce radio signals, a microphone converts sound waves into electrical impulses which are then amplified and used to modulate carrier waves created by an oscillator circuit in the transmitter. (Modulation means to create waves in various patterns of frequency, amplitude or phase which carry information.) The modulated waves, again amplified, are directed to an antenna which converts them into electromagnetic waves that travel through space. At the other end of the transmission, antennae attached to a radio receiver catch some of the waves that have bounced down from the ionosphere. If the receiver is tuned to the same frequency as the waves, it will amplify the signal, remove the modulations, and feed the signal into a loud speaker that converts its electrical impulses back into sound.
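The amplitude-modulation scheme described above can be sketched numerically. The frequencies here are scaled far below real broadcast values so the arithmetic is easy to follow; the constants and function name are illustrative assumptions:

```python
import math

# A sketch of amplitude modulation (AM): an audio signal varies the
# amplitude of a higher-frequency carrier wave, so the audio rides on
# the carrier's envelope.
CARRIER_HZ = 1000.0   # stands in for a station's carrier frequency
AUDIO_HZ = 50.0       # stands in for a voice or music tone
SAMPLE_RATE = 8000    # samples per second in this simulation

def am_modulate(n_samples, depth=0.5):
    """Return carrier samples whose amplitude follows the audio signal."""
    out = []
    for n in range(n_samples):
        t = n / SAMPLE_RATE
        audio = math.sin(2 * math.pi * AUDIO_HZ * t)
        carrier = math.sin(2 * math.pi * CARRIER_HZ * t)
        out.append((1.0 + depth * audio) * carrier)  # envelope carries audio
    return out

signal = am_modulate(8000)  # one simulated second
# The envelope swings between (1 - depth) and (1 + depth) of the
# carrier's amplitude; a receiver recovers the audio by tracking it.
print(max(signal), min(signal))
```

Frequency modulation (FM), discussed in the Armstrong passage below, instead varies the carrier’s frequency while holding its amplitude constant, which is why it is far less vulnerable to static.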
In 1873, a Scottish physicist, James Clerk Maxwell, published a treatise which included a set of mathematical equations to describe the nature of electromagnetic waves. Fifteen years later, Heinrich Hertz built a device to generate radio waves. Guglielmo Marconi gave the first practical demonstration of radio communication in 1895. He used Hertz’s spark coil to transmit the letter “S” in Morse code and a coherer invented by Edouard Branly to receive this message a mile away on his family’s estate near Bologna, Italy. Marconi worked on improving this equipment until he was able to send a message across the Atlantic ocean in 1901. Three years later, Sir John Ambrose Fleming built the first vacuum tube to detect radio waves electronically. Lee De Forest’s “audion” tube, which placed a wire between the filament and plate, offered a way to amplify them. In 1913, Edwin H. Armstrong patented the circuit for a regenerative receiver which, improving upon the audion, fed the radio signal back through the tube several times so that it oscillated with more power and could send long-range signals. Armstrong’s second great invention, the superheterodyne receiver, mixed the voltage from the incoming signal with that from a built-in oscillator so that clear signals were heard at particular frequency settings.
During its first twenty years, radio was a toy for amateur operators. It proved useful in detecting distress signals from ships on the ocean. David Sarnoff became famous as a wireless operator who took telegraphed messages from the Titanic. In 1920, a ham operator in Pittsburgh named Frank Conrad began broadcasting baseball scores and recorded music to his fellow operators. A local store provided free records in exchange for being mentioned in the broadcasts. When a Pittsburgh department store ran a newspaper advertisement offering to sell radio receivers, a Westinghouse vice president saw a business opportunity in manufacturing this product. Westinghouse’s “cats-whisker” crystal radio sets sold for $25. To stimulate product demand, the company set up the world’s first commercial radio station in Pittsburgh with the call letters KDKA. This station began regular broadcasts on November 2, 1920, starting with a report of returns from that year’s national election. A successor to the Marconi Wireless Company, the Radio Corporation of America (RCA), had been established in 1919 to market radio receivers. In 1926, RCA organized the first radio network, the National Broadcasting Company.
The three great pioneers of commercial radio - Lee De Forest, Edwin H. Armstrong, and David Sarnoff - were a contentious bunch who frequently battled each other in the courts. De Forest sued Armstrong for patent infringement in 1915, winning on appeal twenty years later. As general manager of RCA, David Sarnoff was an early champion of Armstrong’s inventions who later became an implacable foe. To deal with the problem of static, Armstrong worked for eight years to develop a radio system whose signals were based on frequency modulation (FM) instead of amplitude modulation (AM). He set up an experimental laboratory atop the Empire State Building in New York City where he was able to complete this work in 1933. Armstrong established his own “Yankee network” for FM broadcasting. Sarnoff, who was not eager to scrap millions of AM radio sets, had Armstrong evicted. Sarnoff also lobbied the Federal Communications Commission to reserve the FM frequencies for a new device, television, which his company was developing. In 1954, Armstrong committed suicide by jumping from a 13th floor apartment window.
Television
Television uses a technology which broadcasts both aural and visual images on electromagnetic waves. Television waves occupy frequencies in a range between 54 and 216 megacycles per second (VHF) and between 470 and 890 megacycles per second (UHF). Like all radio waves, they lie at the long-wavelength end of the electromagnetic spectrum. To create pictures, an electronic scanner passes across a plate coated with a photosensitive material in a zigzag motion covering 525 lines thirty times a second. The plate consists of a thin sheet of mica coated with a silver-cesium compound and backed with a metallic conductor. Light hits the cells in this mosaic at various intensities, causing each cell to emit electrons and retain a positive charge. The scanner, passing its beam across the cells, produces an electrical signal as it releases the charge. This signal passes through an amplifier and goes out in carrier waves. The broadcast signals are picked up by antennae attached to the television receiver. To reconstruct images, an electron beam scans the fluorescent face of a cathode-ray tube line by line, causing individual spots to glow in a visual pattern. As thirty still pictures per second flash across the screen, persistence of vision creates the illusion of motion.
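The scanning figures quoted above (525 lines, thirty complete pictures per second, the parameters of the American NTSC standard) imply a demanding line rate, which a little arithmetic makes concrete:

```python
# Back-of-envelope arithmetic for the scanning scheme described above:
# 525 lines per frame, 30 complete frames per second.
LINES_PER_FRAME = 525
FRAMES_PER_SECOND = 30

lines_per_second = LINES_PER_FRAME * FRAMES_PER_SECOND
time_per_line_us = 1_000_000 / lines_per_second  # microseconds per line

print(lines_per_second)              # 15750 lines scanned each second
print(round(time_per_line_us, 1))    # about 63.5 microseconds per line
```

The electron beam thus has roughly 63 microseconds to trace each line and return, which is why television required much wider frequency bands than sound radio.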
An Irish telegraph operator named Joseph May first noticed that sunlight affected the electrical resistance of instruments made of selenium. That discovery, made in the early 1870s, led to experiments with the electrical conductivity of selenium. In 1884, a German inventor named Paul Nipkow received a patent for a selenium-based television device with a mechanical scanner. It consisted of a pair of perforated disks, one at either end of the transmission, which rotated at a constant speed. Light from the subject passed through the moving holes to hit selenium cells and be changed into electrical signals. The disk at the receiving end converted electricity back to light, which could be viewed through an eye piece. In 1897, Karl Ferdinand Braun invented the cold cathode ray or “Braun” tube which allowed images to be produced by non-mechanical means. An Englishman, Campbell Swinton, proposed in 1908 that “distant electric vision” was possible using cathode-ray tubes at both ends. Experimenters in Germany, Russia, and France worked to develop a practical model of this system. In St. Petersburg, Professor Boris Rozing of the Technological Institute had already applied for a patent on a system that used two mirror drums to scan and dissect the image and a cathode-ray tube to receive it. An engineering student named Vladimir Zworykin assisted in this work.
In the United States, an alliance was formed between General Electric, AT&T, and Westinghouse Electric after World War I to pool patent interests related to radio. Television research also took place under their sponsorship. Westinghouse and General Electric supported research by Charles F. Jenkins, who had invented the motion-picture projector in 1895. In 1922, he applied for a patent on a device that transmitted wireless pictures with prismatic rings as a scanner. In 1923, John Logie Baird filed a patent application in London for a television system using a Nipkow disk. Vladimir Zworykin, now working for Westinghouse Electric, then filed a patent for an all-electric system using a Braun tube as receiver and an improved camera tube. Baird gave a three-week demonstration of television broadcasting at Selfridge’s Department Store in April 1925. Later that year, Zworykin demonstrated his all-electric system to a group of Westinghouse Electric executives. Picture quality was poor, and the executives ordered Zworykin to “work on something more useful.” Edouard Belin demonstrated a device using cathode-ray tubes to three French officials in 1926. In 1927, an Idaho farm boy turned inventor, Philo T. Farnsworth, patented the world’s most advanced television camera tube. He called it an “image dissector”.
Radio Corporation of America (RCA), a General Electric subsidiary, redoubled its efforts to perfect a television system after AT&T’s research laboratories demonstrated television transmission between New York City and Washington, D.C. in April 1927. The picture quality was good, even with mechanical equipment. RCA’s vice president, David Sarnoff, sent Vladimir Zworykin on a trip to Europe to inspect work done there to develop television. Zworykin was most impressed with the system developed by Edouard Belin and associates in France. He thought that their cathode-ray tube, with a few adjustments, might solve the problem of television reception. Back in Pittsburgh, Zworykin pitched this hopeful message to his superiors at Westinghouse. They were not interested. He then met with Sarnoff, who pledged $100,000 to support Zworykin’s research efforts. Now working for Sarnoff, Zworykin hired Belin’s chief engineer, Gregory Ogloblunsky, and together they built a 7-inch cathode-ray picture tube called a “kinescope”. Zworykin then turned his attention to the camera tube. The best equipment was Philo Farnsworth’s image dissector. Zworykin visited Farnsworth in San Francisco and was shown everything. Sarnoff personally offered to buy out Farnsworth for $100,000, but Farnsworth declined. Zworykin then developed his own camera tube employing some of Farnsworth’s concepts.
Zworykin filed a patent application for this “iconoscope” in November 1931 but delayed its announcement. In 1934, Farnsworth gave a public demonstration of electronic television at the Franklin Institute in Philadelphia. In the following year, the U.S. Patent Office awarded him “priority of invention” for his television system. RCA refused to pay royalties. Farnsworth began broadcasting to a small audience from a Philadelphia suburb in 1936. Meanwhile, a new holding company had been formed in England, called Electric and Musical Industries Ltd. (EMI), which was partially owned by RCA. When EMI applied for permission to begin broadcasting in 1933, the application brought a strong reaction from Baird Television Ltd., which had been making experimental broadcasts through the BBC since 1929. The General Post Office and BBC established a commission of inquiry. It ultimately decided to establish a television service in London, utilizing technical apparatus from both companies. The London Television Service began regular broadcasts in November 1936. Its success prompted Sarnoff to start broadcasts in the United States. The first such event took place at the 1939 New York World’s Fair. World War II intervened. Although RCA by then had started to pay royalties to Farnsworth, his patents expired in 1946. Farnsworth then quit the business. RCA had the U.S. television business to itself.
In the United States, the Federal Communications Commission (FCC) was given the authority to regulate commercial broadcasting. It granted licenses to commercial stations to broadcast on certain frequencies. Sarnoff’s firm had created the first radio network, NBC, in 1926. The second network, Columbia Broadcasting System (CBS), was formed two years later from a string of independent radio stations owned by William Paley, son of a Philadelphia cigar manufacturer. Its successful radio programs featuring comedians such as Jack Benny and Red Skelton earned big profits. Paley’s ambition after World War II was to beat NBC in radio competition. NBC had a commanding technical lead in television. Television broadcasting was then restricted to frequencies in the VHF band, which were only enough to support twelve channels nationwide. Paley petitioned the FCC to reserve frequencies in the UHF band for a system of color television which CBS hoped to develop. When the FCC denied CBS’s petition in April 1947, it clarified industry standards and started a rush of applications for commercial broadcast stations. The FCC then froze permits to construct television stations for four years. The scarcity of VHF licenses created a seller’s market for television advertising and a buyer’s market for programming.
At first, advertising agencies representing corporate sponsors controlled the programs that appeared on television. Moving away from single sponsorship of the programs, advertisers gave up the right to license programs while reserving the right to censor objectionable materials. The television networks, principally CBS and NBC, negotiated with independent production companies for ownership of the programs in exchange for a slot in prime-time broadcasts. Texaco Star Theater hosted by Milton Berle dominated early television audiences. Then came the sitcom, of which I Love Lucy was a notable example. This comedy series starring Lucille Ball and her husband, Desi Arnaz, made effective use of television’s visual potential. Shows began to be taped, permitting reruns. After the quiz-show scandals of the mid 1950s, the networks turned to Hollywood for programming content. As U.S. commercial television became concentrated in three major networks, viewers wanting greater variety subscribed to cable-television services. In 1980, Ted Turner’s Cable News Network began broadcasting live news reports from around the world 24 hours a day. Since 1991, western-style television has come to the masses of Asia through satellite broadcasts and cable television. The STAR network reaches 38 countries with a combined population of 2.7 billion people.
Computers
The computer differs fundamentally from other electronically based cultural technologies in its ability not only to record images and information but also to manipulate them in desired ways. Modern computers are a collection of electronic components and peripheral devices that perform the following functions: (1) They enter data into the system. (2) They store the data in memory. (3) They control the computer’s own operation. (4) They perform processing operations as the data is manipulated. (5) They exhibit the results of the manipulation externally. The most common method of entering data is to type letters and numerals on a keyboard. Attached printers or video monitors (cathode-ray tubes) output the results of computation. The computer’s memory consists of electromagnetic codings on a coated disk inside the processing unit. Its operating system is a software program - codings in a symbolic language - which controls the machine’s processing activities. Additionally, the system may accept other canned or customized programs that perform functions such as word processing, spreadsheet creation, or graphics.
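The five functions listed above can be suggested by a small sketch in a modern programming language. Everything here - the class, its names, and the sample task - is invented purely for illustration and does not describe any actual machine:

```python
# A toy sketch of the five functions of a computer described above.
# All names and the "double" operation are illustrative inventions.

class ToyComputer:
    def __init__(self):
        self.memory = {}              # (2) storage of data in memory

    def enter(self, key, value):      # (1) entering data into the system
        self.memory[key] = value

    def run(self, program):           # (3) controlling its own operation
        results = []
        for op, key in program:       # (4) processing as data is manipulated
            if op == "double":
                results.append(self.memory[key] * 2)
        return results                # (5) exhibiting the results externally

computer = ToyComputer()
computer.enter("x", 21)
print(computer.run([("double", "x")]))   # -> [42]
```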
The computer grew out of a tradition of improved calculating machines. John Napier, discoverer of logarithms, published a work in 1617 which proposed a new way to multiply and divide using mechanical “rods” or “bones”. The French philosopher, Blaise Pascal, built a calculating machine with geared wheels in 1642 to help in his father’s business. In 1671, Gottfried W. von Leibnitz built a machine based on binary arithmetic that could calculate square roots. Meant to compute astronomical tables, it was called the “stepped reckoner” because calculations were performed by rotating a drum with stepped teeth which represented numerals through variation in length. Commercial calculators were introduced in the 19th century. In 1820, Charles X. Thomas built a machine following Leibnitz’s design that was the first to be used successfully in business. Another machine performed arithmetical calculations by rotating wheels with retractable pins which protruded through a plate at particular numerical settings.
Unlike calculating machines, computers can perform operations that depend on meeting certain conditions. One of the first machines with that capability was the Jacquard loom, invented in 1801. Joseph Marie Jacquard, a French weaver, developed a technique to weave designs automatically in cloth. Holes punched in cards controlled the loom’s operation. An English inventor, Charles Babbage, was impressed with a portrait of Jacquard that had been created by a process requiring twenty-four thousand cards. In 1835, Babbage designed what is credited as the world’s first digital computer. Called the “Analytical Engine”, it had one deck of punched cards for the data and another deck to control the operating routine. Plungers passed through holes in the cards to feed data into the machine. The computer’s memory consisted of fifty counter wheels to store numerical information. The machine permitted conditional transfers (“if statements”) by which a comparison of numbers directed the operation to other points in the processing routine. There were also iterative loops or subroutines (“do loops”) like those in modern computer programs. Although Babbage did not build a working model of this machine, he did produce drawings which showed all its components.
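Babbage’s conditional transfers and iterative loops have direct counterparts in the “if statements” and loops of modern programming languages. The following sketch, built around an invented counting task, shows both:

```python
# Modern counterparts of the control structures attributed to the
# Analytical Engine: a conditional transfer ("if statement") and an
# iterative loop ("do loop"). The task here is an arbitrary illustration.

def classify(numbers, threshold):
    """Count how many numbers fall above and at-or-below a threshold."""
    above = below = 0
    for n in numbers:          # iterative loop: the routine repeats per value
        if n > threshold:      # conditional transfer: comparison redirects flow
            above += 1
        else:
            below += 1
    return above, below

print(classify([3, 8, 5, 12, 1], 5))   # -> (2, 3)
```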
In 1886, a statistician named Herman Hollerith had the idea that a machine fed with punched cards might compile data collected in the U.S. census. He built such a machine for the 1890 census which allowed its work to be done in one third the time that the previous census had required. Hollerith’s machine held the punched cards above trays filled with mercury. When metal pins dropped through holes to reach the mercury, it completed an electrical circuit and added to the count. Certain positions in the cards held information indicating characteristics of the population. These fields were tabulated separately as cards passed through the machine. While similar to Babbage’s “Analytical Engine”, Hollerith’s invention used an electrical sensing device instead of mechanical feelers. In 1911, Hollerith and others formed a company which later became International Business Machines (IBM).
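The tabulating logic of Hollerith’s machine - separate counters advancing as each card’s fields pass the sensing device - can be suggested by a short modern sketch. The field names and sample records below are invented for illustration:

```python
from collections import Counter

# A sketch of Hollerith-style tabulation: each "card" is a record whose
# fields describe one person; a separate tally accumulates per field value.

cards = [
    {"sex": "M", "state": "NY"},
    {"sex": "F", "state": "NY"},
    {"sex": "F", "state": "PA"},
]

tallies = {"sex": Counter(), "state": Counter()}
for card in cards:                      # each pass of a card through the machine
    for field, value in card.items():   # each hole position completes a circuit...
        tallies[field][value] += 1      # ...and advances the matching counter

print(tallies["state"]["NY"])   # -> 2
```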
The computer would not have been possible were it not for the work of George Boole, an English mathematician and logician. Boole’s An Investigation of the Laws of Thought, published in 1854, presented the concepts of Boolean algebra. This system holds that a proposition of logic can have only two values: true or false. Likewise, in binary arithmetic the digits of any whole number can be represented by one or zero. An American philosopher, Charles Sanders Peirce, realized in 1867 that the values represented in Boolean algebra could be expressed mechanically by “on” and “off” positions in switches built into an electrical circuit. That meant that someone could design a circuit according to the Boolean scheme which either stopped or passed electrical current depending upon whether the switch was open or closed. Such a circuit might perform both arithmetical and logical calculations. In 1937, George Stibitz of Bell Telephone Laboratories connected some batteries, wires, and lights on top of his kitchen table and gave the first practical demonstration of an electrical circuit governed by Boole’s principles.
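Peirce’s insight - that Boolean true/false values, embodied in open or closed switches, suffice for both logic and arithmetic - can be shown with a one-bit “half adder” built entirely from Boolean operations:

```python
# A one-bit half adder made only of Boolean operations, illustrating how
# true/false switching can perform arithmetical as well as logical work.

def half_adder(a, b):
    total = a != b        # exclusive-or gives the sum bit
    carry = a and b       # logical-and gives the carry bit
    return total, carry

# Adding the binary digits 1 + 1 yields sum 0 with carry 1 (binary 10):
print(half_adder(True, True))    # -> (False, True)
```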
The modern era of computing began at the time of World War II. Starting in 1939, IBM engineers worked with Professor Howard Aiken of Harvard to develop a fully automated electro-mechanical calculator controlled by punched paper tape. This machine, the “Mark I”, performed arithmetic computations and could check table references. It was a machine with wheels like Babbage’s except that electrical impulses controlled the switches. The first all-purpose electronic computer was the “Electronic Numerical Integrator and Calculator” (ENIAC) which two electrical-engineering professors at the University of Pennsylvania, John Mauchly and J. Presper Eckert, built with vacuum tubes instead of electro-mechanical switches. Its purpose was to compute firing tables for aiming artillery at German troops. The ENIAC computer could calculate in several minutes what might take a man equipped with a calculator forty hours. This machine consisted of a collection of 8-foot-high cabinets, weighing 50 tons, which were filled with trays containing wired circuits and vacuum tubes. Work on the ENIAC was completed in February 1946 - too late to help in the war effort. Across the Atlantic, however, the British built a computer called “Colossus” which was used to break the German code.
A chance meeting at a railroad station between Herman Goldstine, the U.S. Army’s liaison with the ENIAC project, and John von Neumann, a mathematician at the Institute for Advanced Study in Princeton, brought von Neumann’s immense talents to the design of computer architecture. In 1946, von Neumann, Goldstine, and Arthur Burks published a paper, Preliminary Discussion of the Logical Design of an Electronic Computing Instrument, presenting the concept of a computing machine in which both data and operating instructions were stored. Computer technicians no longer had to rewire the machine when new instructions were issued. This paper also discussed how computers might perform mathematical or logical calculations through step-by-step processing routines. Several universities built machines employing the von Neumann architecture. However, the technical challenge of computers is not limited to designing and maintaining hardware. Software is also a factor. Initially, computer programmers had to write detailed instructions in binary code which the machine could recognize. In the early 1950s, Grace Hopper of UNIVAC developed a “compiler” which would translate short, English-like statements into the machine language. A team at IBM developed the FORTRAN language for scientific programming applications.
Computers were first used for scientific research and large-scale government undertakings. UNIVAC I, developed by ENIAC’s inventors and Remington Rand engineers, was sold to the U.S. Census Bureau to help with the 1950 census. Federal research laboratories at Los Alamos and Livermore needed massive computing power to develop the hydrogen bomb. The U.S. space program spurred a demand for more advanced technology during the 1960s and 1970s. In the 1950s, the two largest computer manufacturers, Remington Rand and IBM, decided to abandon the scientific market in order to develop computers for business. IBM became dominant in that lucrative field, producing large “mainframe” computers that could handle payrolls, billing, and production processes. Control Data Corporation became the leading producer of “supercomputers” for scientific work. However, federal aid to universities for computer research dwindled in the aftermath of the Vietnam war. In 1971, Control Data’s principal computer designer, Seymour Cray, formed his own company to build supercomputers. Cray Research built the largest and fastest computers until the era of massive parallel processing.
Processing speed drove computer development during this time. The faster that computers could handle calculations, the more computing power these machines had and the broader their range of applications. Speed can be described in terms of “clock period”, the shortest time in which the computer can perform a simple operation, or in terms of “flops”, the number of operations which a computer can perform in a second. Howard Aiken’s Mark I computer could multiply two numbers in three seconds, a speed of about 0.3 flops. The ENIAC computer, which used vacuum tubes for switches, ran at roughly 400 flops - 1,200 times faster. When transistors were substituted for vacuum tubes in the CDC 7600 computer, speed increased to ten million flops (or 10 megaflops). Computer speed increased enormously as new technologies of switching were introduced. Integrated circuits, invented at Texas Instruments in 1959, further increased speed by using miniature circuits. Flat transistors and wiring were embedded on small slices of silicon called “chips”. Intel Corporation’s invention of the microprocessor in 1971 provided integrated circuits with all the elements of a computer. Since then, as a rule of thumb, processing speed with microprocessors has doubled every 18 months.
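The speed comparisons in this paragraph can be checked with simple arithmetic, treating speed as operations per second (“flops”):

```python
# Verifying the speed ratios quoted above. Speeds are operations per second.
mark_i = 1 / 3          # Mark I: one multiplication every three seconds
eniac = 400             # ENIAC: roughly 400 operations per second
cdc_7600 = 10_000_000   # CDC 7600: ten megaflops

print(round(eniac / mark_i))    # -> 1200: ENIAC vs. Mark I
print(cdc_7600 // eniac)        # -> 25000: CDC 7600 vs. ENIAC
```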
The microprocessor, which is “a computer on a chip”, led to the revolution in personal computers that began in the late 1970s. The first working model of a personal computer was created at Xerox’s Palo Alto Research Center in California. Here icon-based menus were developed for computer screens. In 1979, Dan Bricklin and Bob Frankston wrote VisiCalc, the first electronic spreadsheet for personal computers, which Dan Fylstra’s firm published for use on the Apple II computer. In 1981, IBM brought out its own version of a personal computer, licensing its DOS operating system from a small company called Microsoft. The Pac-Man video game became popular. The Lotus 1-2-3 spreadsheet for IBM and IBM-compatible machines came out in 1983. Apple Computer made a comeback in the following year with its popular Macintosh, a user-friendly machine with a mouse. Then, in 1985, Microsoft brought out the first version of Windows, also with icons and a mouse. Lotus introduced a program called Notes in 1990 which allowed computers to exchange documents. Microsoft increased its dominance of the computer-software field by licensing the MS-DOS operating system - about 90% of the world’s computers use it - and bringing out improved versions of Windows and other products.
The recent trend has been toward computer networks. In the late 1960s, a team at the University of Illinois hooked up 64 identical Burroughs computers in parallel to create the ILLIAC IV machine. Processing in parallel with several smaller machines gives the same speeds as a larger machine while allowing better access to the system. IBM is promoting the concept of “network computers” to replace personal computers in offices. The terminals are cheaper and system upgrades do not have to be installed on each machine. While computer networks began in an office setting, they soon spread to the home. In the 1980s, the French postal and telegraph service decided to hook up the entire nation to a computer information base. Millions of Americans subscribed to online services such as Prodigy, CompuServe, or America Online which gave them similar access. Computer users became aware of belonging to a limitless network of users, both individual and institutional, in all parts of the world. This system became known as the Internet.
The Internet started in 1969 when the Pentagon contracted with a consulting firm in Cambridge, Massachusetts, called BBN to construct the ARPANET network. In 1972, a BBN engineer named Ray Tomlinson sent the first E-mail message using @ in the address. Initially ARPANET connected computers at four research institutions in the western United States. As computers at other universities and research centers were added, the system evolved into several commercial subnetworks. The World Wide Web began in 1991. Today, the Internet has grown into a global network that links more than 120 million computers. This system in the aggregate is so large and chaotic that new kinds of software have been developed to allow users to navigate in its “cyberspace”. Websites have been set up to focus communities of interest.
Religion in a Broad Sense
Public values change in the course of history. These changing values reflect the institutions that were dominant in society at particular times. Each institution is associated with its own kind of valuable object. While economic and political functions are important, civilization is more concerned with the mental, cultural, or spiritual side of life. Here the softer elements of a culture work their way into people’s hearts. As an institution, religion comes closest to expressing this aspect of human experience. Therefore, this chapter will discuss the question: What, in a broad sense, is the religion of each civilization?
Religion speaks to the core of values which a society has. It often, but not always, includes worship of a God or gods. In his book An Historian’s Approach to Religion, Arnold Toynbee wrote: “If we set out to make a survey of the religions that have been practiced at different times and places by the numerous human societies ... our first impression will be one of a bewildering infinite variety. Yet ... this apparent variety resolves itself into variations on man’s worship or quest of no more than three objects or objectives: namely, (1) Nature, (2) Man himself, and (3) an Absolute Reality.” By the worship of nature, Toynbee meant the ritualistic religions practiced by precivilized peoples. By worship of man himself, he meant the association of religion with political institutions, or with, he wrote, “the worship of one’s own collective human power.” The worship of Absolute Reality referred to the higher religions which focused upon a transcendent spiritual being or God.
All three types of religion identified by Toynbee included worship of a God or gods. God brought human-like personality to the object of worship. Toynbee’s premise was that humanity worshiped what it most feared. When it stopped fearing an object, that object ceased to attract worship and religion moved on to something else that had not been tamed. Nature was worshiped when humanity was at nature’s mercy. With the arrival of agriculture, the food problem became less threatening than the problem of dealing with other human communities. Therefore, the object of worship shifted to political entities. The warring city-states worshiped local gods who symbolized their collective power. This epoch ended when the great political empires brought peace to a region. Human society then became less fearsome. It was time to worship life’s ultimate reality.
Toynbee’s three objects of worship - nature, man’s own community, and ultimate reality - are associated with religions of the first three historical periods. However, the first period in this case would be prehistory. Nature worship was the religion of tribal societies that existed before the first civilization began. Toynbee’s second object of worship - human communities - would relate to the prevailing religion in the first historical epoch, when governments established civic religions. The third object of worship - absolute or ultimate reality - would correspond to ideas associated with philosophically based religions. They belong to the second historical epoch. Humanity’s ultimate reality is approached through theoretical speculation. For many, it has led to the concept of a monotheistic God.
Toynbee’s analysis ends at this point, when religion appears to have been put into its final form. The third, fourth, and fifth epochs of world history lie ahead, seemingly without further religious progressions. Yet, religion, in a broad sense, has continued to develop beyond the stage of world religion.
Personality and Belief
Any successful religion achieves a balance between personality and belief. Belief comes first. This is what people think is literally true. Belief consists of the conclusions of learned persons, those grey-bearded doctors who are consulted on important matters. In Greek and Roman times, philosophy was on the cutting edge of belief. Philosophers had developed a method for discovering truth which allowed them, as Plato said, to separate true “science” from mere “opinion”. God is an infallible source of truth for religious persons. Modern society tends to believe more in the theories proposed by empirical scientists. The universe might have begun with a “big bang”. The human race might have evolved from other species of animal life through a process of natural selection. With respect to social phenomena, our society looks to sociologists or psychologists to supply believable answers. Someone who has an advanced degree from an accredited institution of higher learning is believed, while the ordinary person’s views can be taken with a grain of salt.
Many advancements in knowledge of the natural world have involved mathematics. Alfred North Whitehead identified the period between Pythagoras and Plato (6th and 5th centuries B.C.) and the 17th and 18th centuries A.D. as times when mathematics penetrated the public consciousness to an unusual degree. The Greek philosophical revolution of the 6th and 5th centuries B.C. was inspired primarily by geometry. Pythagoras believed that the world consisted of numbers. Plato was impressed by the purely theoretical nature of mathematical reasoning. A sign hung over the gate of his Academy: “Let no one enter here who is ignorant of mathematics.” Algebraic equations, introduced by the Muslims, became widely used by European mathematicians during the 17th century. Newton’s laws of physics were a set of equations describing force, velocity, and acceleration. Mathematical equations related to periodicity supported Kepler’s theories of planetary motion. Mathematics is a purely objective description of quantities or relationships in the natural world. It is the branch of knowledge most removed from human personality. Mathematicians tend, therefore, to be somewhat unsocialized.
Mathematically based knowledge has proven itself in dramatic ways. Centuries before Europeans knew of the Western Hemisphere, Eratosthenes of Cyrene calculated the earth’s circumference to within one percent of its actual size. Through catapults and other mechanical devices, Archimedes held the Roman armies besieging Syracuse at bay for three years. Nuclear physicists in the 20th century created a type of bomb so awesome that its explosion over two Japanese cities forced Japan’s surrender in World War II. This was belief as basic as anyone might wish it to be. Religion, on the other hand, does not relate so well to the knowledge of mathematics or of the physical world. Its type of belief is more personal. A critical step in the creation of more sophisticated systems of religion is to apply sciences that began with mathematics and the study of nature to the realm of human behavior. That was Socrates’ role in the Greek philosophical tradition. He turned the practice of inquiring about the basic stuff of the world into questions about justice, goodness, and truth. After natural science had made a name for itself in formulating physical laws, the “social sciences” applied its method to studying dynamics of the market place and the human psyche.
This leads to a contradiction. On one hand, human beings want primarily to know about themselves. Religion latches on to beliefs reached by the most advanced methods of acquiring knowledge. Christian theology made use of Platonic and Aristotelian philosophies. The economic theories of Adam Smith and Karl Marx and the psychological concepts of Sigmund Freud and Carl Jung have become the basis of quasi-religious ideologies. On the other hand, impersonal “laws of nature” control events in the physical world. Hippocrates, the father of western medicine, denied that the gods caused disease or that its effective treatment consisted of rituals and prayers to appease their anger. He looked for natural causes of disease and sought treatments guided by his own experience with successful remedies. The scientific method teaches that the scientist should make dispassionate observations of nature and base his conclusions upon observed fact rather than personal intuition. He should leave himself out of his theories as much as possible and become a mere instrument of discovery.
Modern science has tended to displace man from his place at the center of the universe. Copernicus proposed that the sun, not the earth, was the center of the solar system. Darwin saw human beings as a product of evolution from lower species of plant and animal life. Scientists maintain that impersonal laws of nature govern this world. Yet, if the most advanced systems of belief arrive at a purely objective kind of knowledge, it poses a problem for human civilizations. The great mass of humanity cannot accept this type of culture; for man cannot live on ideas alone. While having intellectual credibility, a culture consisting only of ideas is also sterile and cold. This situation creates a spiritual crisis because human beings cannot relate to the purely objective. They need models of personality.
A study was done at the University of California at Santa Barbara which showed that “most people more easily solve a problem when it is cast in social terms than when essentially the same problem is cloaked in abstract numbers and symbols. In one experiment, they reworked a classic abstract logic puzzle into new (social) scenarios ... for instance, subjects were asked to imagine they were a bartender whose task was to make sure there was no one at the bar who was underage ... Fewer than 25 percent of the subjects got the problem right when it was put in terms of numbers and symbols. But about 75 percent answered correctly when the subjects were given the same problem cast in human terms.” Authors of the study speculated that the need for personal references in solving theoretical problems illustrates the “Stone Age intelligence” passed along in our genes.
Objective knowledge is based upon proper delineation of abstractions. One follows certain logical procedures to move between the abstract and the specific. Proper classification and processes of reasoning help to retrieve the desired information. Personality, on the other hand, gives people something to imitate. One copies a finished model without thinking. Imitation, which is the original basis of knowledge, comes before reasoning. The use of personal images in religion may be a learned response from childhood in coping with situations beyond one’s ability to comprehend. As the child relies upon an adult parent to rescue him from dangerous situations, so the religious devotee cries out to God for help in troubled times. This is a complete and emotionally satisfying kind of response. Objective concepts, while demonstrably true, are less able to arouse the emotional side of intelligence. Therefore, no culture intending to touch a society’s larger population can consist of this element alone. Philosophies cannot inspire that degree of personal interest and attachment; only religion can.
Holidays
As religion changed through the ages, it took on the coloration of institutions that dominated the successive societies. A way to track those changes may be to review adaptations made to institutions of sacred space and time. When the Roman church planned England’s conversion to Christianity, Pope Gregory I issued orders to the effect that “the temples of idols ... should on no account be destroyed. He (Augustine) is to destroy the idols, but the temples themselves are to be aspersed with holy water, altars set up, and relics enclosed in them ... In this way we hope that the people may abandon idolatry ... and resort to these places as before.” Places sacred to pagan religions were thus converted to serve the needs of advancing Christianity. The Gospels tell the story of Jesus driving the moneychangers out of the Temple in Jerusalem. In this case, Jesus was able to prevent a place sacred to Judaism from being corrupted by the values of money. Now, in CivIII, bingo games operating out of church basements compromise that ideal somewhat.
The advancing epochs have also converted sacred time to new purposes. Jesus may have expelled moneychangers from the Temple, but that has not stopped the holiday which commemorates his birth from being turned into the year’s most intense shopping season. It is estimated that purchases of Christmas gifts contribute more than a third of an average U.S. retailer’s annual sales, and between half and three quarters of the annual profits. As the Christian holiday of Christmas has been shoved aside by the commercial Christmas, so the Christian holiday once replaced holidays belonging to pagan civilization. The worshipers of Mithras, a Persian god popular among Roman soldiers, celebrated their god’s birthday on December 25. The Roman holiday of Saturnalia, which began on December 17 and lasted for several days, was a festival honoring the god Saturn who had civilized the Italian people. Gift giving was a part of its tradition. In 440 A.D., the Christian church decided to celebrate the Feast of the Nativity at that time of year.
Primitive Religion
Whenever, in the dim and distant past, humanity confronted impersonal phenomena, it turned the incomprehensible whole into human forms. Primitive peoples have seen the elements of nature as old women, young hunters, great fathers, etc. The ancients divided the night skies into constellations bearing human and animal forms. Man’s first attempt to understand patterns in nature was through mythological explanations. There were stories of struggle or intrigue between gods and goddesses whose outcome set patterns in the natural world. For example, the Greek myth of Demeter and Persephone, celebrated annually at Eleusis, explains the changing seasons through a covenant which the gods made with Hades to keep Persephone in the underworld during certain times of the year. So belief and personality were fused into a single structure of knowledge when man first began to try to understand nature.
It is misleading to suggest that primitive peoples believe in the divinity of nature. Belief is the religious posture of modern peoples. Primitive religion began with a fear of nature, not so much in the sense of believing it to be evil as of man’s being thrown helplessly into a dangerous world. Man had to cope with the danger, and, as always, that was done with knowledge. Not knowing cause and effect, man projected his own mental outlook upon nature. Natural objects were thought to have personalities similar to his own. Events took place intentionally, as if human minds were directing them. This animistic view, which makes the natural world a mirror image of man’s own subjective mind, is characteristic of primitive religion. The Moon, stars, and sky were gods endowed with a human spirit, as were other natural elements. They had different spheres of influence, and had to be appeased or approached individually to win their favor.
Gradually one set of gods became associated with the sky, and another set of gods with the earth. The sky gods tended to be male, and the earth gods female. The gods of the sky exuded masculine energy when the Sun poured out its radiant energy upon the world or when Zeus, the thunder-darter, hurled bolts of lightning down from his throne. The earth gods or goddesses, including trees and the earth itself, were more maternal and nurturing; they were a quiet source of natural bounty. So the gods became like a human family, with a father and mother, sons, daughters, and other relatives in various roles. The central theme of this religion was the mating of earth and sky, or, in other words, the showering of sky-brought rain upon the earth which would moisten the soil and let abundant crops grow.
Fertility was the central object of this religion. Fertility promised abundance, both in regard to producing numerous offspring and providing plentiful food. As these came from nature, nature became the target of religious pressure. The fertility cults of Egypt, India, and the Middle East included sexual rituals meant to suggest to nature what action should be taken. Animals thought to have sexual prowess, such as snakes and bulls, were worshiped with an eye to improved reproduction. Another religious concern was fertility of the fields. In medieval England, public marriages were performed on May Day or Pentecost to inspire the newly sown crops. In Java, peasant couples copulated in rice fields for the same reason.
Such rites illustrate the ancient practice of “sympathetic magic”. The theory was that gods, being like people, sometimes had to be shown what to do. To make nature more responsive to their wishes, men would imitate a certain natural process, or partially perform it, or act as if it had already happened. A barren woman would clutch a baby doll in hopes of becoming pregnant. A voodoo witch would stick pins in the wax figure of a person meant to die. That is why festivals of sowing and harvest were commonly associated with sexual promiscuity. The earth was supposed to accept the scattering of seed and prepare for a bountiful crop. Sympathetic magic gave primitive religion its theory of effectiveness. The magician was thought to be able to tap into nature’s immense power by personal gifts or through devices such as amulets or fetishes that had special power. Another magical technique was the use of words in blessings or curses, or other verbal formulations, to cast a spell.
As agriculture became the basis of economic life, a professional priesthood presided over ceremonies intended to produce a successful crop. These rituals had to be performed properly by someone with sufficient knowledge of the ceremony; the key to their effectiveness was thought to be correct execution. Organized religion became a technocracy of magic designed to manipulate or appease the spirits to achieve certain results in the natural world. Human sacrifice was often an element in ceremonies relating to the harvesting of crops. Sometimes human beings were ritually executed, and sometimes substitutes such as sheep were offered in their place. Primitive peoples believed it was necessary to bury someone to fertilize the soil. The rites of human sacrifice became associated with the myth of gods who died and were reborn in imitation of grains which annually sprouted from plantings in the ground. The Egyptian cult of Osiris was one of many cults of vegetal regeneration foreshadowing the death and resurrection of Christ.
Religion has a sense of the sacred as distinct from the secular or profane. Originally there were sacred places thought to be inhabited by the gods. Priests kept up the shrines located on those sites. The people of Sumer built temples on sacred ground. The Greeks and Romans likewise observed the cults of their local gods at sacred sites. In many Hindu homes, a room is reserved for the household deity, an image of Vishnu, who watches over and protects the family. Jacob built an altar at Beth-El to commemorate his dream of a ladder extending to heaven; he called that sacred place “the house of God.” With the advent of written history, the locus of the sacred shifted from space to time. The Jewish Sabbath is a day in the week reserved for religious worship. Festivals and holidays are times in the year devoted to particular divinities or saints. Also, the idea of a time when the whole world would suddenly be transformed at God’s direction grew up in Judaic expectations of the future.
It seems that the concept of an anthropomorphic or manlike God arrived relatively late in religious history. Totemic animals were earlier objects of worship. The selection of particular animals to serve as tribal emblems corresponds to our own use of symbols such as the design on flags to identify different communities. In primitive society, the particular animal which a tribe had adopted as its totem or spiritual emblem was considered taboo. People of the tribe were not allowed to eat its flesh except on rare ceremonial occasions. Afterwards, there was a period of transition when gods might be part human and part animal. The Egyptian sphinx is an example. Ovid’s Metamorphoses describes in hexameter verse the changing of gods into animals, and vice versa. The prophet Daniel dreamed of four political empires emblematically represented by animal hybrids followed by another which was ruled by “one like a man”. The first fully human gods may have been powerful men who had died but whose influence remained. Primitive peoples believed that the ghosts of such persons might return to haunt the living and had therefore to be appeased.
Holidays in this Period
Festivals and holidays have been a part of human culture since prehistoric times. In the age of nature religion, special rituals were performed at the times of the summer or winter solstice and at the vernal or autumnal equinox. Christmas Day comes four days after the winter solstice. May Day was a pagan festival which celebrated the coming of spring. Halloween is related to an ancient Celtic festival which marked the beginning of the new year. (Both of these seasonal celebrations were converted into something else in a subsequent age. May Day became an international labor holiday because of a general strike held in the United States and Canada on May 1, 1886 and a bombing that occurred three days later in Chicago. Halloween is the vigil of All Saints Day, a Christian holiday commemorating a group of martyrs from Roman times. Gregory III moved its festival date from May 13 to November 1 to take advantage of the earlier pagan holiday.)
Religion in the First Epoch of Civilization
With the first civilization came a shift from nature worship to worship of the politically organized community. The gods of nature were converted into gods of parochial states. The states were not only political entities but also objects of worship. Each city had a deity which looked after the well-being of its inhabitants. The local god was nominal master of the human community in whose stead the priest-king ruled. In Sumer, the ranking of a city depended upon the order which its god held in the divine assembly. Eridu was the holiest city because it had the shrine of the god which had created mankind. Shrines were located in the temple, which was also the hub of community life. There was usually a huge statue of the god in half-animal form and an altar to receive sacrifices. The statue was sometimes regarded as the image of the god, sometimes the god itself. A group of temple priests and priestesses, servants of the god, performed the sacrificial ceremonies. The priest-king was both the highest servant of the god and its personal representative.
In becoming identified with political institutions, the new order of religion had to contend with the prehistoric cults of nature worship. The two religious systems managed to coexist. “In Egypt,” wrote Toynbee, “we find the worships of the Sun, the Corn, and the Nile surviving side by side with the self-worship of the cantons. In Sumer and Akkad we find the worship of Tammuz and Ishtar surviving side by side with the self-worship of the city-states. In China we find ... an annual agricultural ritual, in which the prince communes with Heaven and ploughs the first furrow of the new agricultural year, surviving side by side with the self-worship of the Contending States ... In this gradual, peaceful, and imperceptible religious revolution, the new religion has not only imposed itself on the old one; in many cases it has actually commandeered one of the old Nature gods to serve also as the representative of the new worship of parochial collective human power.”
Toynbee gave examples of nature gods which had been appropriated by cities or tribes. He suggested, on the basis of the covenant delivered on Mt. Sinai, that Jehovah had been a volcano god or weather god before becoming the tribal war-god of Israel. Pallas Athena, Zeus’ daughter, was both a war goddess and patroness of olive cultivation before she became the personal guardian and spirit of the Athenian city-state. The supreme god of Egypt, Amun-Re, was a combination of Amun, “the breath of life”, sometimes portrayed as a ram, and Re, who was the sun god. Amun was the chief god of Thebes, capital of the Egyptian empire after the Eleventh Dynasty. Pharaoh was considered to be a living god, son of Re, begotten by immaculate conception.
Prior to the Fifth Dynasty, Pharaoh was a god in his own right, but the cult of Pharaonic worship clashed with the old nature-worshiping religion. A powerful priesthood at Re’s holy city of Heliopolis in northern Egypt had organized all the separate nature cults into a pantheon of nine nonhuman gods among whom Re, god of the sun, was chief. The designation of Pharaoh as son of Re, linking his divinity with Re’s, was therefore a concession to the Heliopolis priests and a sign of Pharaoh’s weakening power.
Conflict later developed between the political cult of Pharaonic sun-worship and a popular cult which worshiped Osiris, god of the Nile river and vegetation. As vegetal life annually dies and is reborn, so Osiris, murdered by an evil brother named Set, was brought back to life through the patient labors of his wife, Isis. The kingdom then was passed on to their son, Horus, represented by a falcon. Horus’ victory over Set was politically significant because his totemic representatives, the pharaohs of the First Dynasty who came from the southern part of the country, had conquered the northern Delta region where worship of Set was concentrated. The myth of Osiris also fed the cult of personal immortality centering upon Pharaoh’s funeral arrangements. After his death, Pharaoh was thought to have rejoined the gods and become associated with Osiris, lord of the underworld. Pharaoh himself survived in an embalmed body with a spirit kept alive by unceasing rituals and prayers. Pharaoh’s faithful subjects could themselves participate in the afterlife through his intercessions.
In Mesopotamia, the political rulers were generally content to rule as bailiffs of a god. Neither of the great kings who unified this region in the 24th century B.C., Lugalzaggasi and Sargon I, claimed divinity for himself, although Sargon’s grandson did. Egypt was unique in the degree to which God was associated with a living man. This man, Pharaoh, was an archetype of God as a Great King. The temple became like a royal court where worshipers petitioned the god for favors. One trembled in his presence, bowed, and offered prayers. If a catastrophe befell the kingdom, it was thought that the king had offended its god in some way. The fortunes of humanity were dependent upon pleasing the gods by means of proper rituals. While earthly kings ruled by divine authority, the deities themselves were understood in terms of the majestic personalities of kings. So gods shed their animal forms and became human.
The Egyptian tradition of living deities influenced religious practice in the Greek and Roman empires. The two greatest military leaders of western antiquity both picked up this idea when they came to Egypt in the course of their conquests. Alexander the Great entered Egypt in 332 B.C. with an invading Greek army. Fascinated with Egyptian religion, he traveled 400 miles to a remote oasis in the western desert to consult with the oracle of Amun-Re. The priests told Alexander that he was Amun-Re’s son. Thereafter Alexander and his Hellenistic successors claimed divinity as imperial rulers of Egypt. Julius Caesar, an admirer of Alexander, succumbed to the god-king tradition while consorting with Cleopatra in 48 B.C. A cult worshiping him as a god was established. The Roman Senate, at Octavian’s urging, officially confirmed this institution two years after Caesar’s death. Although Octavian himself resisted deification, the imperial dynasty which he established fostered a cult of emperor worship. As an expression of their religious patriotism, Roman citizens were expected to offer sacrifices to the emperor’s “genius” or divine spirit.
Besides gratifying the ruler’s vanity, deification of the emperor served a useful political purpose in building religious support for the imperial regime. Belief in the emperor’s divinity warded off possible assassinations by ambitious or disgruntled soldiers. Some emperors preferred to rule as vice-regents of a god, reasoning that prospective assassins would be more effectively deterred if they believed that a vengeful god might survive a successful attempt. The emperors also used religion to achieve particular political objectives. For example, Ptolemy I attempted to hellenize the cult of Osiris-Apis in order to create a religious bond between his Greek regime and the native Egyptian population. The god Apis (the bull) was renamed Serapis and given a Greek visual appearance. A temple to the new god was built in Alexandria. Although Greeks were attracted to the cult of Serapis, the Egyptians continued to worship Osiris-Apis as before. The desired integration of cultures never took place. A better known example would be the decision of Antiochus IV Epiphanes to erect an image of Zeus Olympios over the altar located in the Temple at Jerusalem. His aim was to make Yahweh a local god within the pantheon of gods associated with nations in the Seleucid empire. This infamous act sparked the Maccabean military revolt.
Some Roman emperors, mad with absolute power, proclaimed their divinity in provocative ways. Caligula announced that he was a god equal to Jupiter. He established a temple cult of emperor worship and appointed his favorite horse to be one of the priests. Nero ordered a 120-foot high statue of himself to be erected with solar rays projecting from his head in the manner of Phoebus Apollo. The emperor Domitian deified members of his immediate family, organized a new order of priests to attend to their worship, and ordered government officials to refer to him in official documents as “Our Lord and God”. Numerous Christians were executed for refusing to offer sacrifices before his image. Most emperors, however, regarded religion simply as a tool of statecraft. They gave the gods of conquered peoples an honorable place within the Roman pantheon. Heliogabalus introduced the cult of Sol Invictus (“the unconquered sun”) based upon a Syrian sun-god. Constantine the Great remained devoted to this god even after accepting Christianity. Julian the Apostate, Constantine’s nephew, tried to make Neoplatonism a state religion.
In the Far East, emperors continued to exercise the ancient role of priest-king well into modern times. Each year, Chinese emperors led the nation in performing sacrificial ceremonies at the Temple of Heaven which included incantations to produce a successful crop. The supreme god was T’ien, or Heaven, which represented the governing force or order in the universe. Scholars interpreted this in an impersonal way while the masses prayed to T’ien as a god. The emperor, being the “Son of Heaven”, represented Heaven on earth. He stood at the top of a social hierarchy extending down through the family. The emperor’s decrees were considered an expression of God’s will so long as his regime retained the mandate of Heaven. The emperor of Japan symbolized the Japanese nation’s unity as a family. His ceremonial role was to pay homage to the spirit of the ancestors. The idea of “kami”, or sacred spirit, which the emperor represented in the Shinto tradition, led to a respectful attitude in daily life. Only when militarists seized control of the Japanese government did the emperor become a symbol of its warlike ambitions. Emperor worship in Japan was a type of ancestor worship, expressing the spirituality of a race of people.
Religious belief in the first epoch of civilization was the same as in the prehistoric period. Few questioned whether the gods existed or had real power. Existing belief was merely transferred from the nature gods to gods of cities, nations, or empires. The political religions retained popular consent by respecting ancient traditions and paying homage to the ancestral gods of local peoples. To this was added the respect that comes from demonstrating worldly power. The great kings of the earth had the power to reinforce belief by their splendor and might. In those days, people looked to divine spirits and demons for an explanation of events in the natural world. Disease was thought to be caused by demon possession. Treatment was intended to drive demons from the body. Babylonian astronomers acquired much knowledge of celestial bodies while pursuing primarily astrological objectives. Each planet was a god interested in the affairs of men. Divination and fortune telling were important occupations at imperial courts. The Shang emperors of China consulted experts who read the cracks in bones. Roman generals sought the advice of augurs before commencing battle.
Personality continued to be exhibited in gods and goddesses who inhabited the natural world and sometimes intervened in the affairs of men. Some human beings such as Achilles were descended from gods or personally interacted with them. Kings and emperors actively promoted their own reputations. In writings carved on temple walls, Egyptian Pharaohs boasted of their great victories in battle. An inscription tells how the Assyrian king, Tiglath-Pileser I, killed 120 lions on foot and another 800 lions from his chariot. Darius I of Persia wrote at Behistun: “Fravartish was seized and brought to me. I cut off his nose and ears, and I cut out his tongue.” Civic religion also projected personal images through statues, ornamental carvings, and other pictorial representations. The Greek style of realistic sculpture brought personalities out with unprecedented vividness.
One of the most effective ways to spread the monarch’s image was through coins. These visual tokens of his presence circulated far and wide. A Pharisee asked Jesus whether it was lawful to pay taxes to the Roman emperor. Holding up a silver coin, Jesus asked: “Whose head is this?” Told it was Caesar’s, Jesus remarked: “Render to Caesar what is Caesar’s, and to God what is God’s.”
Holidays in this Civilization
In the epoch of civic religion, holidays commemorated important events in community life. The Romans designated nearly one third of the 355 days in their calendar as being holidays, when it was unlawful to conduct judicial or political business. The number of annual holidays increased to around 175 in the middle of the 4th century A.D. The city of Venice celebrates the “wedding of the Doge and the Sea” during its annual Ascension Day Fair, commemorating the Venetian doge’s victory over Dalmatian pirates in 1000 A.D. The English celebrate “Guy Fawkes Day” on November 5th with bonfires to burn an effigy of Guy Fawkes. He was a member of the failed conspiracy to blow up Parliament with gunpowder on that date in 1605. Bastille Day in France celebrates the storming of the Bastille prison and the freeing of its inmates on July 14, 1789. National holidays in the United States include Independence Day (when the Declaration of Independence was adopted), Presidents’ Day (the birthdays of Presidents Washington and Lincoln), and the Martin Luther King holiday (birthday of the slain Civil Rights leader).
Religion in the Second Epoch of Civilization
The second civilization brought the infusion of philosophy into religion. Philosophy was the most advanced system of knowledge yet seen. Its uplifting spirit touched societies from China to Greece as a roughly contemporaneous group of great thinkers and seers championed the superior values of goodness, justice, and truth. Yet, this philosophy was working from within a more ancient religious culture. That culture, too, made a creative contribution to the emerging world religions. Like two parents, male and female, revolutionary philosophy and traditional religion merged their separate elements to create a new religious order based on truth and faith. The belief structure inherited from traditional religion was secure. Christian belief was rooted in the concept of a monotheistic God inherited from Judaism. The Hebrew God Jehovah had proved his superior powers through the miracles demonstrated by Moses and Elijah’s contest with the priests of Baal. Likewise, the Buddhist and Hindu world religions presuppose a more ancient belief in the gods. Philosophy questioned previous religious practices and even God’s existence, but in the end settled down to produce a more sophisticated and ethically focused type of religion.
Greek philosophy had an enormous impact upon human thinking because it won the hearts and minds of an elite class that dominated societies in the far-flung lands ruled by Alexander and his successors. With its mathematical foundation and self-conscious methodology, this philosophy was considered to be the most sophisticated kind of knowledge in the world. It had captured the belief of the world’s most powerful and intelligent individuals, who saw in it a key to truth. However, the societies of India and China also went through a philosophical phase during the 6th and 5th centuries B.C. Every school of opinion had its representative. Those whom history remembers most were the moral philosophers and idealists with a definite prescription for what human beings ought to do.
The central figure in the Greek philosophical movement was Plato. Plato founded a school for philosophers in Athens. His theory was that, by their study of geometry, music, and other sublime subjects, the souls of young men would be converted from base interests to a love of goodness and truth. Plato believed that to know the pure idea of the good would make men want to be good. He compared education at the Academy with scaling a mountain and viewing the world from a higher perspective. It was the duty of philosophers, having glimpsed the higher truths, to go back down into society and make use of their vision. “For once habituated (to truth),” he wrote, “you (philosophers) will discern them (worldly conditions) infinitely better than the dwellers there, and you will know what each of the ‘idols’ is and whereof it is a semblance, because you have seen the reality of the beautiful, the just, and the good.” Plato believed that acquiring correct knowledge of ideas was the key to improving individual lives. It was a matter of immersing oneself in abstract reasoning, developing sublime habits of mind, and exercising them in daily life. Plato maintained that ideas were real - more real, in fact, than physical objects. In the Timaeus, he described an eternal realm of existence in which ideas were stored, like patterns to create things in the world.
Plato’s student, Aristotle, founded a rival school of philosophy. His treatise, Nicomachean Ethics, approached the concept of goodness from another direction. In Aristotle’s view, it was good to be happy. However, happiness was not so much the pursuit of immediate pleasure as an activity in accordance with virtue which brought long-term satisfaction. Ethical philosophy became a question of how to pursue happiness in a rational manner. Aristotle allowed that individuals had freedom of choice in setting happiness as an end. Ends he defined as “what we wish for”. Means were “what we deliberate about and choose” in pursuing those ends. Happiness as an end represents, therefore, fulfillment of one’s true desires. Desire creates value; it is a psychic mechanism which makes certain things personally important and stimulates the process of seeking to achieve them. If it is “good” to fulfill one’s desires, the mission of philosophy becomes to develop a strategy for doing so successfully.
The rational pursuit of happiness comes down to steps that an intelligent person might take to improve the chances of reaching this objective in an uncertain world. One eliminates pursuits such as immoderate wine-drinking which bring short-term pleasure with long-term pain. Most physical pleasures drop out of one’s inventory of desires. One also eliminates activities over whose outcome one has little control. For instance, the pursuit of another person’s love involves a high degree of risk and uncertainty. Therefore, no rational person would hope to achieve a goal of this sort. Philosophers would instead seek to fulfill intelligent desires - ones which put a person in the position of being able to will a successful pursuit of happiness. The most radical solution to this question was Buddha’s. He taught that the way to happiness, or avoidance of suffering, was to extinguish desires completely. If one has no desires, one cannot fail to achieve them. Short of that, a rational person might escape the unhappiness of unfulfilled desire by avoiding the kinds of emotional entanglements that bring pain.
By this criterion, the best kind of love is self-love. Having the source of reciprocation under one’s control, one can will satisfaction of desire. One is thus able to contain the flow of emotional energy and stay serene. Therefore, a life of equanimity, devoid of pity or joy directed towards one’s fellow human beings, was considered the most sensible approach for a philosophically directed person wishing to be happy. Indeed, that was the direction that the Graeco-Roman and Indian schools of philosophy ultimately took. Philosophers cultivated an attitude of mental detachment, suppressing sentiments of sympathy and love. Seneca said: “Pity is a mental illness induced by the spectacle of other people’s miseries ... The sage does not succumb to such-like mental diseases.” Epictetus told his disciples: “If you are kissing a child of yours ... never put your imagination unreservedly into the act and never give your emotion free rein ... Indeed, there is no harm in accompanying the act of kissing the child by whispering over him, ‘Tomorrow you will die.’”
The excessive rationality of this philosophy had to be tempered by other influences for the culture to survive. These influences came in the form of “superstitious” practices and beliefs encountered in conquered territories. Alexander’s armies brought back to Greece, besides booty, an interest in Babylonian astrology. The Romans conquered an immense territory which included the land of Judaea. And from Judaea came the strange tale of a man named Jesus who was crucified but rose again from the dead. That such a creed could meet and overcome the sophisticated philosophies of Rome was totally irrational; yet it happened. The Christian message seemed absurd to the Roman ruling class. Its principles of pacifism and submission to worldly authority were despised as “slaves’ virtues.” In Christianity, reason was offset by the softer human qualities of mercy and love. The Romans could not see any merit in this. Such doctrines seemed to be encouraging weak and irrational tendencies of character. Yet, the Apostle Paul observed in First Corinthians: “Divine folly is wiser than the wisdom of man, and divine weakness stronger than man’s strength.”
What Christianity gave to Rome’s philosophical culture was the element of personality. The masses of Roman society could relate more readily to a message of compassion than to admonitions aiming at mental detachment. This religion was especially popular with women and slaves. In the image of Jesus on the Cross, the suffering masses found a sympathetic model. His subsequent resurrection from the dead delivered a powerful message of hope. The Gospel story of Jesus presents the image of a man, not unlike a philosopher, who is highly intellectual and personally disciplined. Since this was not an image congenial to ordinary people, nascent Christianity borrowed other personal elements from rival cults in Roman society to achieve increased emotional appeal. From the Egyptian cult of Isis, Phrygian cult of Cybele, and others, it added the female image of an adoring mother, which was bestowed upon the Virgin Mary. Jesus’ role of saviour was foreshadowed by the Persian god Mithras who slew a bull. Devotees of the Greek god Dionysus, like Christians celebrating the Eucharist, believed that they were drinking the god’s own blood when they imbibed wine.
A similar process was meanwhile converting the philosophical religion of Buddha into a saviour cult having mass appeal. Buddha’s original teachings, which are preserved in Theravadin Buddhism, showed individuals how they might achieve Nirvana, or blissful escape from recurrent lives, through a shift in attitude. The problem was that, once the teacher had escaped to personal extinction, he was inaccessible to followers of his religion. A new branch of the religion, Mahayana Buddhism, developed in northwestern India during the first two centuries A.D., before spreading overland to China. The Mahayana teaching held that the Buddha, out of compassion for his followers, had delayed his own departure from earth to help others achieve Nirvana. Buddha was thus transformed from a spiritual philosopher into a personal saviour. Though he was himself an atheist or, at least, a person uninterested in questions concerning deities or eternal life, Buddha eventually became in the religious culture a godlike figure endowed with miraculous, benevolent powers. The “bodhisattvas” were deities of lesser rank who would also respond to calls for help. Mahayana Buddhism, like Christianity, developed a practice of charitable works and a belief in a paradise for virtuous souls after death.
The older Hindu tradition of India became a polytheistic religion with a rich array of male and female personalities. Having survived the Buddhist challenge, it reorganized by incorporating elements from its rival religion. Previously, the Vedic rituals were intended to make the gods help people in some way. The Sanskrit literature had become rather technical. Revived Hinduism featured worship with an emotional bond between gods or goddesses and their devotees like that between the bodhisattvas and Buddhist worshipers. A triad of supreme deities - Brahma, Vishnu, and Shiva - ruled over a universe which balanced creation and destruction. Shiva and his female consort, Kali, represented the forces of destruction and death. Vishnu, god of love, was a personal savior who was sometimes incarnated in a human being. Brahma was the original creator of the universe, beyond good and evil. Ganesha, the elephant-headed god of wisdom, was the son of Shiva and Parvati, his benign consort. An outgrowth of earlier nature religion, the Hindu pantheon was united in a matrix of family-like relationships defined in literature and myth. Worshipers prayed to particular gods or goddesses for particular purposes, brought offerings, or attended festivals held in their honor.
The religion of Islam was as austere in its concepts and practices as the Hindu religion was lush. In reaction to the Hellenic influences upon Christianity, Islam reaffirmed the uncompromising monotheism of Moses. There was no trinity of persons in the godhead, only one God. Except for theological refinements, the belief structure of this religion was similar to that of the Jewish and Christian religions. The righteous, who passed the Last Judgment, would be admitted after death to paradise. Islam’s prohibition of personal images in religion made it difficult to project personality in Islamic culture to the degree possible in other religions. Still, Mohammed himself had left an historical record which included some of this element. The personal ingredient in Islamic culture also comes from a rich tradition of poetry centered in persons who have submitted completely to God. They would include the great Persian poets Rumi and Hafiz, both members of the mystical Sufi brotherhood. Islam makes strict personal demands upon the worshiper. At the same time, Allah is considered to be a merciful and compassionate God who acknowledges the fact of human weakness.
Chinese religion began with a strong tradition of ancestor worship over which were laid the philosophies of the 6th and 5th centuries B.C. Two native Chinese philosophies, Confucianism and Taoism, were joined in the 3rd century A.D. by a full-blown religion imported from India. This was Buddhism in its Mahayana form. Each religion had a literature to support its belief component. The followers of Confucius have the “Nine Classics”, which include five books attributed to Confucius himself and four written by others. Confucian literature is a mixture of historical, ethical, ritualistic, and metaphysical writings. The Taoist literature is focused on one book, the Tao-Te-Ching or “Book of the Way and of Virtue”, which is attributed to Lao-tse. While the Confucians tend to be rationalistic and the Taoists mystical, both philosophies became religious cults which elevated their founders to godlike status. Every large Chinese city had a temple where officials offered sacrifices to Confucius’ spirit. The Mahayana Buddhists engaged in a highly personal form of worship. Large statues or carvings of the Buddha adorned Buddhist temples and caves. The Taoists concocted magical potions and “elixirs of life” to gain personal immortality.
Belief is well established in all the world religions. An extensive theological literature details doctrinal positions on each aspect of religious life. Councils of clergy have been convened to define orthodoxy. Heretics have been identified and punished. It would be pointless to discuss this aspect further. With respect to personality, the supreme person is, obviously, God. Yet, God is not the main focus of personality in the world religions. What we know about God personally comes from an earlier scripture. When religion became philosophical, God became more like an idea. Aristotle wrote that “God’s self-dependent actuality is life most good and eternal.” In other words, God personifies eternal Goodness. Toynbee wrote that the higher religions all worship “a Reality that is one and the same behind its diverse aspects.” In other words, God’s being unifies all existence. The “ontological argument” of St. Anselm defined God as the most perfect being of whom we can conceive; God must therefore exist or else he would lack an essential attribute. This was playing with ideas. The consensus of religious opinion has been that God is beyond human understanding - an all-powerful yet personal being whom we cannot definitively know.
Since God is unknowable, personality in CivII would center, first, upon the prophets or founders of the several world religions and, second, upon a myriad of lesser figures in each religious tradition. Literature would be the vehicle for exhibiting these persons. As presented in the four Gospels, Jesus is a character in an intensely dramatic and intimate story. The lives of Mohammed and Buddha, and of Confucius and Lao-tse, are similarly known. The lives, sayings, and ideas of these great religious personalities come through most clearly in a verbal medium. Yet, religious culture has also made use of personal images. Mahayana Buddhism has been called “the religion of images.” Inspired by Greek models of visual art, statues of the Buddha were introduced in China in the 6th century A.D. Typically, this figure would be seated in lotus position, his right palm raised and his left palm lowered, with elongated ears and a fat, contented face. Christianity has, of course, the image of Jesus hanging on the Cross. It has the Madonna and child as portrayed in countless paintings. At Christmas, the creche recreates the scene of the Nativity.
With respect to the lesser personalities, we can start with famous hermits like St. Anthony or with spiritual acrobats like St. Simeon Stylites. Their daring exhibitions of self-discipline won the admiration of the entire Christian world. Indian religion provides holy men who perform conspicuous feats of telepathy or acts exhibiting insensitivity to physical pain. Medieval Europe was obsessed with the Virgin Mary. The period of Roman persecution had produced a large number of Christian martyrs and saints whose bones or other remains became objects of worship. Such relics, kept in a shrine or church, were thought capable of producing miraculous cures. Mass pilgrimages took place to visit the saintly remains. Some saints were heroic missionaries like St. Patrick and St. Columba. St. Francis of Assisi is known for his gentle nature and his way with animals. St. Theresa was a Christian mystic and Carmelite nun. Each holy man or woman exhibited the spirit of God in some personal way.
Holidays in this Civilization
Each major religion has established holidays to commemorate important events or persons. The Jewish Passover remembers the time when God killed each first-born child in Egypt but passed over the homes of Hebrews whose doorposts were smeared with blood. Muslims celebrate Mohammed’s birthday and the feast following the fast of Ramadan, the month when the Koran was first revealed. Buddhists celebrate the Buddha’s birth and death and the date of his achieving nirvana. The spring festival of Holi and the festival of Durga-Puja, held in honor of the goddess Durga, are major Hindu holidays. Easter, which commemorates Christ’s resurrection, is the most important Christian holiday. In addition to Easter and Christmas, the calendar of Christian holidays includes days celebrating such events as Christ’s Ascension into Heaven, the Pentecost (when the early Christian community was filled with the Holy Spirit), and the Epiphany (revelation of divine power at Jesus’ baptism), as well as commemorations of Christian saints. During the Middle Ages, it was considered a sin to do “servile work” on these holidays. Public worship and merriment instead took place.
Religion in the Third Epoch of Civilization
One would be tempted to say that Protestant Christianity was the religion of the third civilization. The cultural impetus for this epoch came from Europe, and the Protestant Reformation was Europe’s most important religious event. However, the spirit of this age was secular rather than religious. Renaissance humanism considered man, not God, to be the measure of all things. Its culture proclaimed the dignity and worth of the individual person. There was a revival of interest in pagan arts and letters bequeathed from classical antiquity. The Christian virtue of poverty gave way to rediscovered appreciation of wealth. Where medieval culture had embraced the philosophical tradition of exalting mind and hating the body, the human body became a beautiful object for artists to paint. Although the basic concepts of Christianity did not change, the institutional church faced a challenge to its authority. Religious faith became a matter, first, of royal determination and, then, of individual choice. It ultimately became a part of one’s ethnic heritage, which preserved the morals of a community.
Doubting Thomas would not believe that Jesus was resurrected from the dead until he had put his fingers inside Jesus’ wounds. Likewise, the attitude of intellectuals in the late medieval and Renaissance periods was that “seeing is believing.” Galileo saw that, contrary to Aristotle’s opinion, pendulums of equal length swung at the same rate regardless of the width of their arcs. He chose to believe the evidence of his own eyes rather than Aristotle. The scientific revolution of the 17th century took place at a time when Christian convictions had grown coercive and violent. European intellectuals were disgusted by the rancorous theological disputes that had led to the Thirty Years War. They wished to channel their creative energies into an area where reasonable men might agree on some points. Scientific truth in its modern incarnation is not a system of belief derived from a superior intelligence but a working hypothesis designed to fit an observed set of facts. The general pattern of knowledge is determined by these facts. The pattern is freely changed if a new set of facts should appear which contradicts existing theory. Science is therefore more a system of “anti-belief” - a studied policy not to hold fast to its principles in the face of contradictory evidence - than of belief in a traditional sense.
Yet, because of its affinity to the natural world, this type of knowledge has been applied to technologies which visibly affect people’s lives. Scientific knowledge has literally changed the earth’s landscape. Because people can see its results, science has instant credibility. Although the initial scientific discoveries centered in astronomy, enlightened Europeans soon became interested in increasing agricultural yields through crop rotation and horse-drawn plows. The invention of the steam engine in the 18th century was applied to pumping water from mines and to large-scale textile manufacturing. This required coal; and, to haul coal from the mines to its place of use, industrial engineers built inland canals and railroads along which steam-driven engines could travel. Discoveries in metallurgy developed stronger grades of iron and steel. Bridges were built of this metal, and then skyscrapers. Electricity lit up the cities and propelled trains in the subways. One could hardly doubt the power of scientific knowledge to create a world of marvelous convenience and wealth. Prosperity itself depended on finding and applying the knowledge more quickly than one’s competitors.
The natural sciences were related to a new type of philosophy that emerged during the 17th and 18th centuries. This “empiricist” philosophy, associated with such persons as Bacon, Locke, and Hume, saw the world differently from the way the idealistic philosophies had. Where the idealists held that ideas existed independently and were the source of worldly objects, empiricist philosophers treated the mind as a mechanism which produces ideas. The purpose of philosophy became to discover through introspection how the mind worked. Mental processes were based upon operations of a bodily organ called the brain. The natural world itself was a huge, clock-like mechanism that operated according to causal laws. Locke, Rousseau, Montesquieu, and others were architects of a new system of government based upon a social contract, consent of the governed, and respect for property rights, as opposed to hereditary privilege. The principle of divided power protected the safety and well-being of the governed. David Hume and his friend Adam Smith were pioneers of economic theory who argued that national wealth is maximized by removing restrictions upon commercial and personal freedom. These social philosophers created the belief systems supporting democratic government and free markets.
It is clear that in the third epoch of civilization people believed in money. Money was valuable and real. For all the questioning about God’s existence, however, few bothered to ask whether money existed. They just assumed that it did. Gold and silver underlay traditional forms of money, so European adventurers went to the New World to seek this kind of wealth. The Spanish king acquired tons of precious metals from American mines only to discover that his nation was becoming bankrupt. The huge increase in the supply of silver had produced severe monetary inflation. Each ounce of silver bought less than before. So silver money was not quite as solid as people had thought. Another revelation came in the early 1700s when the French state, exhausted from Louis XIV’s wars, thought it had run out of money. A Scottish financier, John Law, won support for a proposal to create new money by issuing bank notes backed by his own capital. This scheme succeeded beyond his wildest dreams. Public confidence had made the money real. But then, when Law’s bank was combined with a stock company to sell land in Louisiana, a speculative frenzy ensued which drove up the price of the shares to unsupportable levels. When the bubble burst, Law fled the country in disgrace. Another lesson in reality had been learned.
What was the nature of a nation’s true wealth? A Scottish economist, Adam Smith, attempted to answer this question in a book published in 1776. The Wealth of Nations argued that wealth consisted not of quantities of money but of useful goods and services freely produced and sold to willing purchasers in the market place. The mercantilist scheme of running consistent trade surpluses could not be practiced by all nations at once. Smith’s idea that free markets optimized national wealth and that governments should stay clear of this process to the greatest extent possible became the foundation of capitalistic faith. Some seventy years later, Karl Marx proposed the opposite theory: that national economies would be better off if governments not only meddled in commercial markets but, indeed, took over the entire structure of production and managed things according to principles of economic “science”. Two competing ideologies, each backed by political regimes armed with missiles and hydrogen bombs, polarized the world’s nations in the waning years of CivIII. It was this epoch’s equivalent of religious warfare.
The money-centered culture carried with it the additional belief that possession of wealth meant something in society. In ancient Chinese society, it had not. The merchants, though comfortable, were despised. In the post-Renaissance European and American societies, the stigma against money was largely removed. To be rich was an important qualification for high social standing. Still, the nouveau riche did not quite belong in a class with the old-money aristocrats who bore their wealth gracefully. Education was useful in repairing that deficiency. Children of the rich needed to acquire the trappings of nobility to make their status secure and complete. So, along with the belief in money went a belief in schools. The first generation might consist of boors or criminals who earned the money, while their children acquired a taste for the finer things in life. Learning to speak and write in grammatically correct sentences, being conversant with some of the civilization’s best works of literature and art, and having the money to do as one pleased - these were the marks of a gentleman.
Personality in the third epoch did not necessarily follow belief. As Graeco-Roman philosophy led to an ethical position devoid of human warmth, so empirical science lacks the element of engaging personality which mass culture seems to require. Disciplined to be objective, the experimental scientist leaves himself personally out of the picture. While a certain cult has formed around the person of Albert Einstein, scientists are perceived to be rather bland individuals who frequent laboratories and other antiseptic places. Only the proverbial “mad scientist” seems colorful enough to excite the popular imagination. Perhaps, then, we should be looking to the business community for heroes? They are the ones who have assembled the largest amounts of money. However, industrial, financial, and commercial leaders as a group lack the personal color and social appeal to inspire masses of people. They generally work behind the scenes of their commercial empires and have personalities almost as bland as the scientists. Occasionally, an Andrew Carnegie or Henry Ford will excite popular interest by their entrepreneurial boldness and philanthropy, but these are the exceptions.
The beginning period of this epoch was not lacking in heroes. Christopher Columbus, Vasco da Gama, Ferdinand Magellan, and Ponce de León went boldly to places where no European had previously set foot. The military conquests of Hernando Cortés and Francisco Pizarro were as dazzling as any in history. Still, these great historical figures did not personally excite the culture of CivIII as the founders and saints of religion had in CivII. The problem may be that they lacked a vehicle for exhibiting their personalities. Books of world history are not widely read. Where, then, did the third civilization obtain its personal images? A plausible answer is: from literature and art.
The Renaissance is known as a time when skilled artists and craftsmen produced paintings, buildings, sculpture, and other beautiful objects. Portrait painting became popular. So the art of creating visual images, always an important means of conveying personality, played a big part in CivIII from the beginning. To the visual arts were added intimate personal expressions in the form of love poems such as those which Dante wrote to Beatrice or Petrarch wrote to Laura. Shakespeare’s dramatic works presented an array of memorable characters. These were literary ways of delivering personality to a mass audience, albeit in a fictional mode. What was not fiction was the artist himself. Artists were persons of developed vision who had a talent for expressing it in a certain medium. They had personality in a different sense.
Raphael once said: “To paint a beautiful woman, I need to see many beautiful women ... But since there is a dearth ... I use as my guide a certain idea of the beautiful that I carry in my mind.” The artist’s expression is in one sense a naturalistic depiction of an object, but, as Raphael admitted, the artist intrudes with his own images and ideas of style. Since each artist carries inside him a different set of images, an artistic expression is stamped with the personality of its creator. Renaissance painters were aware of Plato’s conception of beauty, and it guided them in their art. The profession of artist was, therefore, more than simple craftsmanship. Artists were intellectuals cut from the same highbrow cloth as philosophers. However, unlike the philosopher who presents a universal image of truth, an artist expresses a personal vision through techniques related to his art. He develops a unique craft by patient exercise and experimentation. At the end of this process comes a cultivated habit representing the artist’s own style. That makes the artist himself a stylistic commodity which attracts a certain clientele.
After the invention of printing, it became possible to record a writer’s exact words and reproduce them in many copies. Numerous readers could thereby become intimately acquainted with the writer’s habits of mind. Printed newspapers with large circulations began to serialize the writings of well-known authors. Novelists were paid a certain sum of money for each written line, depending on their popularity. The reading public learned to recognize and appreciate the individual writers. Another kind of artist was the composer of music. Working with musical notations instead of words, he, too, developed personal style. One could recognize the composer from the music. Orchestral concerts made his works known. As visual artists, writers, and musicians over the years produced individually recognizable works, a tradition of artistic, musical, and literary culture appeared. Masterpieces of the visual arts became available to a wide audience through new technologies of reproduction such as chromolithography, photoengraving, and color printing. Player pianos and phonographic recordings reproduced orchestral music. Each self-respecting community acquired institutions of high culture, including symphony orchestras, opera houses, and museums, to display the cultural works in public.
Alexander Selkirk became a celebrity when he recounted his adventures of having been stranded for four years on one of the Juan Fernández islands in the Pacific Ocean to the journalist Richard Steele, who wrote up the story and published it in a London-based journal in 1713. Six years later, Daniel Defoe published a novel, Robinson Crusoe, based on the same experience. The characters Tom Sawyer and Huckleberry Finn, who appear in Mark Twain’s novels of the same names, are notable personalities in American fiction. However, the authors themselves, rather than the characters appearing in their writings, have been the main focus of CivIII personality. During the 19th century, books began to include pictures of authors on the title page as well as their names. Novelists such as Charles Dickens, William Makepeace Thackeray, and Mark Twain became cultural idols while touring on the lecture circuit. Serialized novels and the publication of many similar works by the same author created public expectations of him. Literary styles gained a following.
It may have been the English Romantic poets who first inspired the idea that a writer’s life could be as colorful and interesting as that of any character found in his writings. In the heady period when England led a coalition of nations to defeat Napoleon’s armies, Lord Byron cut a bold figure. A voracious womanizer and gambler who was perpetually in debt, he toured Europe during the French occupation and wrote a poetic account of his adventures, Childe Harold’s Pilgrimage, which made him internationally famous. Byron’s friendship with Shelley, his restless travels, and his heroic fight to liberate Greece from the Turkish empire during the last year of his short life, in addition to his prolific verse, combined to suggest an intellectually and emotionally intense personality which was attractive to youth. Poetry became a source of spiritual excitement with a strong connection to life. Young lovers spoke to each other’s hearts through this medium of beauty and truth. Prose literature ripened into the thick 19th century novel. Writers such as Dickens, Balzac, Hugo, and Tolstoy developed a specialty in describing the brutal lives of the poor. Art was in league with powerful political currents which called for the emancipation of humanity from wage slavery and other ills.
Education at the time of the Renaissance focused upon classical literature. Starting in the 17th century, the schools paid increased attention to works written in contemporary languages, which were both a stylistic model for writing and a source of national pride. Schoolchildren began to study their national literature. College students were encouraged to take courses in the liberal arts which would expose them to, in Matthew Arnold’s words, “the best that has been thought and said” in a nation’s cultural life. No student of English literature could be ignorant of poets such as Milton, Wordsworth, Shelley, or Keats. The orchestral music of Bach, Mozart, Handel, and Beethoven was familiar to all who had achieved any degree of cultural sophistication. It became a measure of a nation’s greatness how many of these creative giants its people had produced. The French led the way with a new style of prose writing which produced dramatists like Molière and essayists like Montaigne and Pascal. The English could boast of Shakespeare, Milton, and Pope. The Germans were known for their composers of music. The late 19th century French set the pace in experimental painting. Italian, Flemish, and Dutch painters provided more than their share of the “Old Masters” exhibited in museums. And the nationalistic Americans claimed to have matched or surpassed the Europeans in most areas.
Treasured as part of humanity’s cultural heritage and exhibited in liberal-arts courses, the works created by famous writers, artists, and musical composers loomed above the horizon of CivIII like stars in a heaven of beautiful expressions. It became a sign of personal sophistication and intelligence to be a consumer of this culture and to be able to understand and appreciate its exquisite design. To be conversant with names suggesting a high degree of intellectual refinement became the mark of an educated person. One had the worldly yet emotionally detached attitude of the connoisseur who knows excellence in its many forms. The artist, source of that excellence, became the center of cultural attention. Here was genius combined with an often colorful and glamorous personal life. Vincent Van Gogh’s bout with insanity, F. Scott Fitzgerald’s life in high society, Ernest Hemingway’s masculine pastimes, and Jack Kerouac’s beatnik lifestyle conveyed an image of living on the edge. (Public interest in Claude Monet increased when it was alleged that he had a mistress.) The idea took hold that one must have personally experienced life at the extremities to be able to write or paint so knowledgeably about it. And so, the hard-living, hard-drinking, womanizing artist or writer of the early 20th century captured the public imagination and helped to satisfy its hunger for personality.
Holidays in this Civilization
When civilization became commercial in the third epoch, opportunistic merchants turned Christian holidays into semiofficial occasions to sell merchandise. The cult of commercial Christmases finds scriptural support in Charles Dickens’ A Christmas Carol, featuring the character of Scrooge. In the original story, Scrooge is too tight-fisted to give his employee, Bob Cratchit, time off from work to celebrate Christmas with his family. Since that theme does not suit contemporary business thinking, Scrooge has instead become someone who is too cheap to spend money on Christmas presents. Santa Claus, today a seasonal employee of department stores, once personified St. Nicholas, a Christian bishop of the 4th century. St. Valentine’s Day, which used to honor a saint of that name, has become a day for sweethearts to buy flowers or greeting cards for each other. Halloween is an occasion for giving candy to children or attending ghoulish costume parties. The purely commercial holidays include Mother’s Day, Father’s Day, Grandparents’ Day, and Secretaries’ Week; they name the person who is to receive gifts. Washington’s Birthday, no longer a stand-alone national holiday, remains a day for merchants to offer special sales.
Religion in the Fourth Epoch of Civilization
“Fame is really our religion in America,” wrote UCLA literature professor Leo Braudy. “And we have a constantly changing calendar of saints whom we encounter in our media churches, especially films and television, which have the largest congregations.” To think of television or film entertainment as a religion may seem a stretch, since few find its shows to be culturally edifying. Yet, in the fourth epoch of history, popular entertainment sets the cultural and spiritual agenda. The element of belief is barely visible here. Entertainment generally does not present a serious message or, except for the commercials, aim to convert someone to a particular point of view. It is “make believe” - an outgrowth of children’s play. From childhood on, people understand the value of this experience: by pretending to be someone else, or to be in a particular situation, one can experience such things safely. Entertainment is emotionally stimulating without requiring a commitment of preparation or attention. One can relax, laugh, and have fun.
Personality is the strong suit of the entertainment culture. Films, sound recordings, radio, and television have an unusual ability to capture the sensuous aspect of personality and transmit its images to a large number of people. Radio listeners grow comfortable with the voices they hear at certain times of the day and with the imagined persons behind them. Television viewers are used to having certain individuals in their living rooms. The electronic media have thus extended our personal neighborhoods to include all the people whom we have come to know from their productions. The entertainers’ personalities are spread out before us in various roles. On the screen in a movie theater or on television appear the images of familiar persons, their faces and bodies in front of our eyes, close up or distant, in a variety of characters, stories, and scenes. We feel that we are with these people and know them personally.
For all the lonely persons in this world, the electronic image of the performers breaks down their sense of isolation and reconnects them with humanity. These performers have vibrant personalities, yet they are accessible to ordinary people. Some, like dancers and musicians, have obvious talent. The film stars tend to be picked for more elusive personal reasons. Lana Turner reportedly was “discovered” by a Hollywood film mogul while working behind the counter at Schwab’s drug store. Mainly, actors and actresses are paid for “being themselves” and acting naturally, although having an attractive body and face helps. That being the case, the idea has taken hold that anyone can be a star. All it takes is being in the right place at the right time, plus a bit of luck.
In the previous civilization, personality appeared in the pale medium of printed words or, indirectly, in the texture of an artist’s creative vision. The technologies of electronic recording and communication have filled in the physical presence that was previously left to the reader’s imagination. The viewer or listener can now see or hear actual recordings of a scene. The personal image of the performer who stands before the camera or microphone dominates expression in these media. As the night sky disappears with the sun’s rising, so the old cult of musical composers, artists, and writers has faded with the arrival of electronic technologies that can deliver the far more vivid images of performing artists exhibited in the full glare of sensuous detail. The focus of personality has therefore shifted from the person who conceives and writes a cultural work to the person who performs it. The performer’s direct personal image is so much more powerful than the hints of personality found in the works of pre-electronic culture.
There are still persons who write dramatic scripts serving the same function as ones written in Shakespeare’s time. They are the screenwriters for films and the writers of scripts for made-for-television drama. Yet, neither the critics nor the general public seems to care about their artistry any more. Budd Schulberg, a screenwriter who won a prize at the Deauville Festival of American Films, complained at the awards ceremony that only the film directors, never the screenwriters, were listed in the festival’s program. “If the play’s the thing, so is the movie script,” he argued. In the field of recorded music, the public spotlight falls upon the person who sings and records a popular song rather than the person who wrote it. Jerry Leiber and Mike Stoller, who wrote “Hound Dog” and other songs for Elvis Presley, remarked: “We always thought it was sort of pathetic that the originator of the song ended up with small potatoes and some decal, some imitator, would wind up making a bloody fortune off their efforts.”
The star system has conquered Hollywood. Because a performer has personal qualities that appeal to many people, he or she becomes a hot property in the motion-picture industry. Film proposals immediately become viable if one or another performer who had a recent box-office hit can be persuaded to play a leading role in the proposed picture. “If a star is very hot,” said Kirk Douglas, “he can get a film made.” For that reason, the talent agents who represent actors and actresses of proven appeal have become top power brokers in Hollywood. In the new era of free agency which replaced the old studio system, they are involved in putting together the packages of talent - performers, directors, writers, musicians - needed to produce a film. The music industry runs on the injections of personality which recording artists put into their songs. Phonographic or tape recordings, which pick up each inflection in a singer’s voice, deliver the unique personal renditions of musical works. Vocal recordings generally outsell instrumental music because the listeners relate to people most of all.
As an intensely personal medium, electronic entertainment provides widely imitated models of personality. Elvis Presley’s sideburns and the Beatles’ interest in Transcendental Meditation affected personal fashions of their day. James Dean has long been a hero for rebellious youth. Many who cannot relate to other people on the basis of hobby or occupation may share an interest in the movies or television shows that they have seen. Entertainment experiences date people generationally. They are something for strangers to talk about in casual conversation. To have witnessed an important event on live television such as the shooting of Lee Harvey Oswald or man’s first steps on the Moon makes one feel connected to history. Professional sports make a ritual of uniting communities behind the home team. To be a rabid sports fan is an acceptable model of sociability - not quite as stuffy as some others. The obituary of a man who had once headed the tax department of a large corporation included this statement: “Despite 60 years in Minnesota, he remained a loyal (Green Bay) Packer fan. He began attending their games in 1925 and always happily recounted how he had a 48 yard line, 12th row seat during the 1929, 1930, and 1931 championship seasons.”
As far as belief is concerned, the entertainment industry stands ready to deliver whatever fantasies significant portions of the population might have, be they visions of adventure, easy money, or sexual intimacy. Tourism has changed from travel to places like the Grand Canyon or Williamsburg to what Hal Rothman calls “entertainment tourism”. He explains: “You walk through a tourist attraction and you know that you are the most important thing in it. It holds a mirror up to you and says that in this reflection you can be whatever you want to be. Las Vegas is the epitome of this.”
Professional sports feed on the sense of fan loyalty combined with a tradition of “sandlot” baseball, school teams, and families that enjoy recreational pastimes together. Yet, their very success has driven up player salaries, ticket prices, and the cost of sponsoring televised games. When politicians in south Florida refused to build the Florida Marlins a new stadium with taxpayer money, the Marlins’ owner traded or sold off virtually every starter on the 1997 World Championship team with the result that the 1998 Marlins finished in last place. Heretofore, some parity in the playing ability of the two competing teams has been needed to maintain belief in the authenticity of athletic contests. However, the Major League teams located in smaller cities cannot afford the player salaries which those in the larger cities pay. The Minnesota Twins have slashed player payrolls to achieve profitability at the risk of sacrificing team loyalty. To push the entertainment aspects of the event with special promotions like “Dog Days”, blimps, and pregame concerts may or may not overcome that basic deficiency.
David Sarnoff, who unveiled television broadcasting at the 1939 World’s Fair, believed that this medium would improve the culture of the United States. “It is probable,” he said, “that television drama of high caliber and produced by first-rate artists will materially raise the level of dramatic taste of the nation.” This did not happen. Values other than artistic excellence kicked in. One might say that entertainment moguls today believe in the Nielsen ratings, especially viewer ratings within the prime demographic categories. Program producers use every trick in the book to keep viewers from switching channels, especially those who will be likely shoppers for the advertised products. The “eyeballs” must be held at all costs. If newsworthy or educationally valuable programs attracted viewers, television would certainly air them; but since they do not, programs of lower quality which are cheaper to produce flood the airwaves.
Entertainment is simply entertainment - an attempt to amuse rather than instruct. True or not, whatever engages people’s interest and attention will continue to be produced. This is a sector which appreciates the well-done image. While a few stodgy people think that television or film should spread knowledge, George Page, a producer of television documentaries, has said: “Television and film can always only scratch the surface ... If you try to be definitive, you wind up with terrible television and terrible, unwatchable films.”
Yet, while movie scripts are basically fiction, the element of belief can also be important. A film such as Jurassic Park, which was based upon the implausible premise that dinosaurs were brought back to life, had to convince audiences that the script had a connection with reality. Director Steven Spielberg insisted that the success of this film depended as much upon its “scientific credibility” as the special effects. The idea was that DNA from a dinosaur’s blood had been sucked by a mosquito which was trapped in amber and preserved for 130 million years. Scientists extracted the DNA from the amber and used it to clone a dinosaur. In fact, at the same time that Jurassic Park was being filmed, biologists reported successfully extracting DNA from an ancient bee preserved in amber.
Children learn through play to distinguish make-believe from reality. Not even this prepares them for the baffling situation that they face when exposed to products of the entertainment industry. A mother took her 3-year-old daughter to see Disney’s “Aladdin on Ice”. She reported that her daughter’s “excitement over ‘Aladdin’ was so intense that it rendered her speechless ... Midway through the first act, the little one finally spoke.
‘Mama, is that Aladdin the same one we have at home?’
‘What do you mean,’ I asked. ‘The Aladdin in our video, or the Aladdin Ken doll?’
‘The movie Aladdin, ding-dong head.’
‘Don’t call me ding-dong head,’ I said, and explained that the movie Aladdin was a cartoon of an imaginary Aladdin, whereas this Aladdin was a person on skates who was pretending to be an imaginary Aladdin.
‘Which one is real?’
Now I was rendered speechless. Which one, indeed. I’m awed by the layers of reality that I must navigate to explain pop culture to a preschooler.”
In the early days of live television, part of viewer excitement was knowing that, because the program was live, anything could happen. The performer could forget the lines or make an embarrassing blooper, and the audience was there to watch. There was a sense of anticipation missing in today’s taped shows. The experience was real. In those days the television industry stressed its unique ability to communicate with masses of people in the case of civil-defense emergencies. The networks sought respectability in quality news operations to offset criticism of their more profitable but inane entertainment shows. News is a type of programming weighted more heavily towards the “belief” end of the spectrum. The television viewer watches the news partly for the experience of being entertained by new and unusual events and partly for the purpose of monitoring a possibly useful stream of information. Because the news is “unscripted”, the public would be outraged if it were disclosed that news reporters simply made up their stories. In an era of staged media events, however, the line between real and fake news can sometimes be hard to detect.
Some types of entertainment depend on the authenticity of not knowing the outcome of a spectacle in progress. Participants in a lottery must have assurance that the winner has not already been selected when they place their bets. Even events such as game or quiz shows which are seen as pure entertainment must have a foundation in belief. When Charles Van Doren confessed that he had been coached to give the right answers on Twenty-One, it created a national scandal. During an athletic contest, one does not know which contestant will win. To know the final result of a game while watching it would detract from the suspense of experiencing an incomplete event. If the game is broadcast live, viewers know it could go either way. Credibility was stretched in the 1996 Summer Olympics when the sponsoring network, NBC, failed to disclose to viewers that it was airing a delayed tape of Kerri Strug’s gymnastics performance with an injured ankle. The commentators suggested that her points were needed for the U.S. team to win a gold medal. In truth, the commentators already knew the outcome. Strug’s routine, while gritty, was not actually needed to win the medal. NBC had decided to stress unfolding personal drama at the expense of reality.
Part of the value of watching an athletic contest is knowing that, because in any such contest there will be both winners and losers, the athletes are under real pressure to win. The uncertainty of victory both adds to the viewing excitement and produces true champions. A champion must learn to discipline himself or herself to perform well at a time when it counts. The performer must learn to control nervousness. The adrenalin must be flowing and the mind be sailing smoothly through a routine. While some talent and much practice are involved, the spectators also realize that a champion performer must be in the right frame of mind. There is an art to achieving this “mental edge”. Such concentration is difficult to achieve and the public knows it. Therefore, it is customary for television commentators to interview the winning athletes in the heat of their victory with the object of looking inside their heads. What was the athlete mentally trying to accomplish while making this superb effort? What special thoughts might have inspired it? How, indeed, does it feel to be a winner? The answers to such questions may be the closest that many in our time will come to witnessing divine inspiration.
With respect to scripted entertainment, people realize that its spectacle is mostly illusion. The tapes can be edited to remove blemishes and mistakes so that an artificial degree of perfection is achieved. The performing artist who is “up” for a performance may revert to being a disorganized wreck once the camera is turned off. After all, these are just actors - persons who make a living by pretending to be someone else. Still, people yearn to know the reality beneath the illusion. They eagerly read fan magazines, tabloid newspapers, or anything else that purports to show the real person behind the celebrity performer. The late-night television talk shows present an endless stream of guests from the entertainment industry. These guests make no claim to possessing wisdom or even advanced acting skills whose secret they will now reveal to audiences on the show. The public is interested in them simply as people - seeing the reality behind their public persona. A big thrill lately has been to watch the celebrities fall from a position of storybook success to the depths of humiliation, if not vile criminality. What else would account for the intense interest in Tonya Harding’s contract to injure a rival skater or in O.J. Simpson’s murder trial?
The world of electronically transmitted entertainment hangs like the Moon before people’s eyes, so close yet so far away. Though in some sense real, it cannot be personally touched. There once was a man from New York who regularly attended the live broadcast of a network radio show. Hoping to be “discovered” as a comic talent, he had memorized a number of snappy lines in case the host called upon a member of the audience to say something. He never was called upon. America’s premier industry, gambling, is built on the notion that, despite the unfavorable odds, “I can be that one in a million” who wins the lottery or jackpot. Millions of people are each willing to trade a small but real sum of money for the dubious chance of becoming a “big winner”. Phineas T. Barnum’s adage, “There’s a sucker born every minute”, sets the tone for this age. The master showman is not exactly deceiving us about his intentions. We are allowing ourselves to be deceived so long as this gives us and the children some good, clean fun.
Holidays in this Civilization
The holiday tradition has carried into the fourth epoch in the form of seasonal entertainment features. As Christmas has had several previous incarnations, it is not surprising to find television shows such as the Bob Hope Christmas Special at this time of the year. Bing Crosby’s “White Christmas” was the all-time bestselling record for five decades. Labor Day, transplanted from CivIII, has become known as a time when Jerry Lewis conducts his annual telethon to raise money for children who suffer from muscular dystrophy. New Year’s Day has become a time to watch collegiate football games and parades on network television. Super Bowl Sunday is an example of a holiday established for purely entertainment purposes.
Religion in the Fifth Epoch of Civilization
While it is much too soon to say what will be the religion of the coming epoch, one can identify some of its elements in the computer culture seen to date. It appears that the fifth civilization faces a situation opposite that of CivIV. While the personality aspect is weak, belief is strong. Everyone believes in computers. Computer intelligence is perfect and quick. If a mistake appears in the result of its calculations, this must be the fault of the human programmer or someone who fed faulty data into the system. Wall Street believes in computers. It has valued the stock of Microsoft, a 20-year-old computer-software company, more highly than that of General Motors and made its 40-year-old chairman the richest man on earth. Parents buy home computers for their children, hoping that they will pick up skills to ensure future employability. Corporate America believes that computers can improve product quality and customer service while reducing costs.
With respect to personality, the computer has none. Cartoonists may joke about its feelings and thoughts, but even robots have a more people-friendly image. The types of people who work with computers are quite unlike the supercharged, attractive persons who work in the entertainment industry. They are “geeks” and “nerds” who while away their lives at a computer terminal. Those reclusive individuals who are addicted to “surfing the Internet” should come out every once in a while into the sunlight of real life and meet some people in the flesh. One imagines them, stereotypically, to be rather owlish individuals who click on pornographic Websites for excitement or send hate-filled messages to one another. Alternatively, there are the “cyberpunks” who are a type of juvenile delinquent armed with the technical knowledge to steal from the telephone company or invade corporate databases. It might be possible to make heroes out of these malicious hackers in a perverse “Robin Hood” sort of way.
If self-made billionaires had an attractive personal image, then the computer industry would have delivered more than its share of this commodity. That appears not to be the case. Instead, we must look elsewhere in the computer culture for models of personality. Perhaps this is to be found in Jennifer Ringley’s enterprise. She is a 21-year-old woman living somewhere in Washington, D.C., who has installed a Quick Cam camera in her bedroom to record scenes from her daily life. She invites people to log on to her Website for regular photographic updates. Reportedly, this Website receives 100 million hits each week. Although there is some nudity, the chief attraction seems to be simple companionship.
The Internet is too diffuse to magnify personal images as the television networks do. Rather, its appeal is one of self-definition. Somewhere in the millions of E-mail addresses a person can find like-minded individuals, perhaps even soul mates, in each area of life. The trick is to know who we are and then learn to express our individual preferences and ideas to elicit a self-affirming response.
Summary of Individual Chapters
Chapter One: In Search of a Pattern in World History
An important part of historical study is the task of finding a design in the record of human experience. World history is embodied in a set of stories. The stories tell how humanity has progressed from one situation to another - from a less to a more complex type of society.
The crux of the matter is to determine the turning points of history. They are times which mark a dividing line between two different types of culture. In contrast with histories centering in the experience of particular nations, this one follows changes in the values and structure of society.
The introduction of new cultural technologies creates a space for new types of public experience. They define the successive epochs of world history. Civilizations are not societies which rise and fall in recurring cycles but cultural systems which build upon the work of their predecessors. Curiously, these civilizations appear to be worldwide. That makes it possible to view world history with a single focus.
Chapter Two: Institutions Differentiating within Society
The flow of world history follows the creation of an increasingly complex society. Ever more specialized institutions appear.
When civilizations first arose in the eastern Mediterranean area, civilized societies were ruled by institutions that combined political and religious authority. During the first historical epoch, the political function split from the religious. Royal governments created territorially extended empires by force of arms.
However, the experience of military violence, cruelty, and injustice produced a yearning for a more rational and peaceful world. Philosophers expressed such ideals. In time, philosophy found an outlet in religion. There followed an age of idea-based religions which transcended nationality - the so-called "world religions".
Subsequently, these religions became contending empires which fought for worldly power. Then came a movement away from spiritual strife as humanity embraced a more sensuous, and commercial set of pursuits.
The epoch of European dominance, beginning in the 15th century A.D., featured values centered in wealth and in the cultural trappings of wealth. Western expansion brought all the world's people in touch with each other for the first time.
After two bloody wars, this third civilization began to dissolve in the new culture of popular entertainment. Making people have fun became a serious business. Gaining and keeping their attention became a road to power and wealth.
Chapter Three: Personality and Belief
The institutions of government, world religion, commerce and education, and popular entertainment have a spiritual side which is tied to their belief systems and perceptions of attractive personality. Each historical epoch has its own "religion" in a broad sense.
A religion has beliefs concerning fundamental questions. It also promotes certain models of personality.
The nature worship of tribal peoples gave way to "the worship of one's own collective human power." Religion was in the hands of government.
Then prophets and philosophers challenged civic authority. They created a new kind of religion which could be formulated in creeds. Fidelity to those creeds offered a path to Heaven.
"Religion" in the epoch of commerce and education turned to things of this world. People believed in education, money acquired through successful careers, and the creative greatness of artists and musicians.
The invention of electronic technologies capturing sensuous images has created a culture of immediate spectacles which the community can share. The world of big-time entertainment offers fame and fortune to the lucky performers who find a place in its shows; but, as the gossip columns reveal, these glamorous individuals may have their share of problems.
Chapter Four: A Short History of Civilization I
The history of the first civilization is a history of government. It includes the experience of wars and changing political dynasties. This is history as it is commonly understood.
Monarchical government began with the foundation of the early city-states. It grew to the size of empire when the localities came in conflict with each other. Certain kings prevailed in these wars. The losers were defeated and enslaved.
Like a pair of book ends, the multi-millennial reign of imperial governments in Egypt and China frames the beginning and end of this epoch.
Western peoples look back to Rome for their model of empire. Before that, bloody empires rose and fell with some frequency in the Middle East: The Hittites, Assyrians, Babylonians, Medes, Persians, and Hellenistic Greeks were among those competing for power.
The Achaemenian Persian empire was revived under Parthian and Sasanian kings before succumbing to the armies of Islam.
India had two short-lived indigenous empires before foreign Mogul and British rulers unified the subcontinent. Imperial splendor reached a peak in the 3rd century A.D.
By the 7th century A.D., only the Greek Byzantine and Chinese imperial dynasties remained to represent this civilization in the Old World.
Balance-of-power diplomacy prevented a revival of empire in Europe. Only religion could bind diverse peoples in a community.
Chapter Five: A Short History of Civilization II
The history of the second civilization began in that remarkable intellectual and moral awakening that occurred in scattered societies during the 6th and 5th centuries B.C. Spiritually advanced persons who lived then have left their teachings to posterity.
The culminating event of this epoch was the establishment of three world religions - Buddhism, Christianity, and Islam - and the ideological transformation of earlier religions such as Judaism and Hinduism. In league with government, these religions staked out territories of influence.
This type of religion was driven by ideas rather than rituals. Besides the founder's teachings, religious doctrine includes the work of interpreters who evaluate, codify, and explain doctrinal positions.
Religion has, however, a worldly side in the hierarchies of clergy who staff its institutions. Here ideological zeal and ambition may lead to a result at variance with the beneficial and peaceful values at the core of the religion.
Toward the end of the second epoch, Christian crusaders went to war against Moslems who ruled the Holy Land. Moslems and Hindus fought for control of India. Buddhists and others cultivated the martial arts.
Meanwhile, communities of mystics, monks, and scholars practiced the hard discipline of a spiritually centered life. Their quiet experiences, too, are part of the history of this second civilization.
Chapter Six: A Short History of Civilization III
The third civilization began with an awakening which has been called the Renaissance. Its culture originated in northern Italy where commercial success was combined with a taste for classical scholarship and exquisite art.
European influence spread with the Portuguese and Spanish voyages of transoceanic discovery. West Europeans colonized lands in the New World which Columbus had "discovered" en route to the Orient. Rival nations bordering the north Atlantic fought for control of the trade in oriental spices.
Later, a brisk trade in rum, coffee, and tobacco developed between Europe and its colonies in North America and the Caribbean islands. Slaves imported from Africa were put to work producing commodities for export.
The savage warfare between Protestants and Catholics caused European intellectuals to shun religious controversies and pursue secular learning. Scientific discoveries brought technological innovations that transformed manufacturing and transportation.
Industrialized societies gained wealth while developing social rifts. The laboring class asserted itself through strikes. Parliamentary governments challenged the authority of kings. Wars and revolutions advanced ideals of progress against the old order.
Having defeated Spain on the seas and France in land battles fought in India and North America, Great Britain became the world's leading colonial power. Challenged by Prussian Germany, this sea-based nation threw the flower of its youth into a continental war from which it never fully recovered. Its former colony, the United States of America, filled the power vacuum.
Anticolonial movements in the 19th and 20th centuries brought political independence to peoples in South America, Asia, and Africa.
Chapter Seven: A Short History of Civilization IV
It may seem strange to suggest that entertainment is the basis of a new civilization replacing that of the past five hundred years. Yet, signs of its cultural dominance are compelling.
This historical epoch began with the minstrel shows, freak shows, and circuses of the previous century and with popular sporting events such as horse races, boxing matches, and baseball games. Spectacular exhibitions such as the one held at the Crystal Palace in 1851 attracted large crowds.
However, it was the invention of electronic devices to record and transmit images of sight and sound which created the new popular culture.
After serious ideas had led to the carnage of two world wars, Americans wanted something a bit lighter. Some people enjoyed themselves at Broadway theaters or in clubs featuring jazz music. Others followed the careers of Babe Ruth and Joe DiMaggio on the baseball diamond.
Movies came of age in the 1920s when they acquired sound. Commercial radio stations began broadcasting music, news, and light entertainment. The creative ferment between black and white people's entertainment brought forth rock 'n roll music. Television broadcasts, begun after World War II, became the center of attention in many households. The lure of easy money fueled a gambling craze.
Entertainment tastes became diversified: some enjoy shows suitable for "families" while others prefer "adult entertainment".
Computer-generated images create new vistas of visual excitement. There are new opportunities to have fun with illusion-producing machines.
Chapter Eight: The Impact of Cultural Technologies upon Public Experience
The reason that the introduction of new cultural technologies is linked to new civilizations is that, in delivering an image or message to an audience, these technologies create their own type of experience, coloring it in certain ways. Certain institutions depend on their communicative service.
Government bureaucracies employ the technology of writing. The invention of the alphabet put written language into the hands of merchants and others leading active lives. The exposure to visual symbols suggested to some philosophers that abstractions had an independent existence.
Printing increased literacy. It fostered a more precise way of thinking, so important to modern science. Well-known authors came to acquire cult followings.
That changed when the technologies of film production, music recording, and radio and television broadcasting brought the personal images of performers into full view. Famous people were packaged and sold as image commodities.
With the development of computers, the culture is again set to change. Perhaps the individual experience of interactivity and connectedness will spawn a new set of public values.
Chapter Nine: A Short History of Cultural Technologies
Written language was invented in ancient Mesopotamia as a means of recording commercial transactions. The same set of symbols was used to express numbers and words.
Ideographic writing began when scribes chose different symbols for the quantities and types of commodities. Phonetic elements crept into writing driven by a need to express abstract concepts. In some scripts, the symbols expressed syllabic sounds.
The alphabet, whose letters represent the pure sounds of speech, first appeared in the Middle East during the 2nd millennium B.C. Two Semitic peoples, the Phoenicians and Aramaeans, carried its technology to distant places in trading expeditions.
The Phoenician alphabet gave rise to the Greek and Latin alphabets, parents of most European scripts. Far Eastern societies have retained the earlier ideographic or syllabic system of writing.
Printing came to the West from east Asia. Gutenberg's pioneering use of movable type sparked an explosion of printed literature. Mass-circulation newspapers appeared in the 19th century.
Photography and telegraphy, invented in the 1830s and 1840s, were among the first technologies to use chemical processes or electrical signals to capture or express images and words. The phonograph and motion-picture machine delivered a series of images in time.
Radio and television broadcasting sent messages through the air waves to persons with receivers tuned to particular frequencies. There came to be a culture of live images connecting the performers with mass audiences.
The computer, first developed during World War II, has grown in processing speed and capacity while becoming physically miniaturized.
Chapter Ten: Using History to Predict the Future
Can world history be used to predict the future? If the future resembles the past, perhaps so. Otherwise, a way to anticipate coming events in our civilization might be through analogy with other civilizations in a similar phase.
Each of the four world civilizations whose history is already known follows a life cycle. Generally, its period of exuberant, creative expansion is followed by a maturing phase that brings empire. This, in turn, leads to violence and coercion as an attempt to retain worldly power.
One also discerns a pattern by which institutions emerging in one period are fundamentally altered two epochs later.
Historians want to distinguish historical turning points from ephemeral changes in the culture. Besides the appearance of major new cultural technologies, this book identifies other conditions that are present in times of fundamental change:
First, the new civilizations arise in an environment of political parochialism and vigorous commerce.
Second, this environment produces important innovations in mathematics and commercial practice.
Third, there are expanded geographical horizons. People's imaginations are excited by perceptions of a wider world.
Chapter Eleven: Intimations of a Fifth Civilization
The computer age is upon us. Though in its infancy, we know this epoch will bring distinct changes to society. To predict the future of this civilization, one can anticipate impacts arising from the nature of the technology.
Already there is interest in the commercial application of computers. One can envision powerful new modes of selling and distributing products which give consumers more information, choice, and control.
Education is another area which foreseeably will be transformed. Computers give students the ability to interact individually with a mechanized teaching source. They also have an unlimited capacity to replicate lessons. Shortages of high-quality education could be a thing of the past.
The most profound result of computers may be as a tool allowing man to remake himself. Computers can control the extensive information contained in the structure of DNA molecules. They have the potential to replicate processes of the human mind.
In this "Frankenstein civilization", man and machine will forge a common future which is at once dangerous and exciting in its far-reaching possibilities.