Mythorelics

Taoist mythology, Lanna history, the nature of time and other considered ramblings

Location: Chiangrai, Chiangrai, Thailand

Author of many self-published books, including several about Thailand and Chiang Rai, Joel Barlow lived in Bangkok 1964-65, attending 6th grade with the International School of Bangkok's only Thai teacher. He first visited ChiangRai in 1988, and moved there in 1998.

Friday, March 30, 2018

Has agriculture promoted addictive behavior?

It is generally accepted that the history of agriculture began more than 10,000 years ago.
Major forms of agriculture arose independently in at least seven different areas of the world. Each depended on a different mix of plants and non-human animals, and each combined horticulture, or small-scale gardening, with other activities, like gathering and hunting, in different proportions.
According to long-standing archaeological theory, there are eight plants considered the domesticated “founder crops” in the story of the origins of agriculture on our planet. All eight arose in the Fertile Crescent region (what is today southern Syria, Jordan, Israel, Palestine, Turkey and the Zagros foothills in Iran) during the Pre-Pottery Neolithic period some 11,000-10,000 years ago. The eight include three cereals (einkorn wheat, emmer wheat, and barley); four legumes (lentil, pea, chickpea and bitter vetch); and one oil and fiber crop (flax or linseed). Figs might be a ninth.
The origins of agriculture weren’t just in the western and northern Fertile Crescent (along the Euphrates), but also in the foothills of the Zagros Mountains of Iran (in the eastern Fertile Crescent). At the site of Chogha Golan there, charred plant remains (including wild barley, goat-grass and lentil) dating from 11,700 to 9,800 years ago were excavated; by 9,800 years ago, domesticated emmer wheat had appeared. Evidence of pea, chickpea, bitter vetch, and flax from 10,000+ BCE has also been found in North Africa, Israel and Syria.
By 6000 BCE agriculture had developed independently in the Far East, with rice as the primary crop.
The Kebaran culture of the eastern Mediterranean area (c. 16,000 to 10,500 BCE), named after its type site, Kebara Cave south of Haifa, is associated with the use of the bow and arrow and the domestication of the dog. The Kebaran is characterized by the earliest collecting of wild cereals, known from the uncovering of grain-grinding tools. The Natufian culture which followed existed from around 12,500 to 9,500 BCE, with a sedentary or semi-sedentary population (they were the original builders of Jericho, the world’s first city) even before the introduction of agriculture. Some evidence suggests deliberate cultivation of cereals, specifically rye, by the Natufian culture at Tell Abu Hureyra, the site of the earliest evidence of agriculture in the world. Generally, though, Natufians exploited wild cereals. Animals hunted included gazelles.
Before the Natufians, roaming hunter-gatherers known as the Kebaran culture trudged the Mediterranean coast as their ancestors had for thousands upon thousands of years. What the Natufians appear to have done, and were possibly the first to have done, is to say “Enough! I’m stopping here, thank you,” and build villages.

These villages are very small by modern standards, no more than forty meters across and with populations of less than a couple of hundred people. There was almost nothing like them before. The houses are crude, more like shacks, but they do show a surprising degree of care in their organization and maintenance. Also, whilst there’s no pottery, there are stone bowls and grinding stones.
Perhaps most importantly, these seem to have been all-year-round settlements. The teeth of hunted animals such as gazelle show that both summer and winter kills were being brought back to the villages. Also, there’s plenty of evidence for house mice and rats in numbers appropriate to a village occupied all year.

Connections with the Neolithic:
What intrigues archaeologists most about the Natufian is that, just under three thousand years later, the Neolithic, with the first evidence in the world of agriculture, started in pretty much the same place. It’s difficult not to see the two events as connected, with one leading to the other. The question that archaeologists have asked since the discovery of these earliest villages is “Why then? Why there?” A number of factors have been suggested.
The start of the Natufian culture, 12,500 BCE, also marks the end of the last glaciation. The world’s temperatures rapidly increased at this time by several degrees. This seems to have increased rainfall in the Levant, extending woodland further inland, although at the same time a rise in sea level might have countered this effect a bit. The improvement in climate did, however, reverse around 11,000 BCE, when the “Younger Dryas” event caused a return to cold weather for the next thousand or so years. It was only after the weather warmed up again that agriculture started.
Agriculture changed society in many ways: primarily through division of labor but also through limiting mobility and the variety it could bring. Exchange became more important, considered and unavoidable. Storage hadn’t been much of an issue before; now it was, as was protection of what was stored. There’s been discussion of resultant social and sexual inequality, despotism, militancy, increased disease incidence and declining health.

Proposed Centers of Origin of Some Crops:
1. Near East (Fertile Crescent)- wheat and barley, flax, lentils, chickpea, figs, dates, grapes, olives, lettuce, onions, cabbage, carrots, cucumbers, and melons; fruits and nuts.
2. Africa- Pearl millet, Guinea millet, African rice, sorghum, cowpea, Bambara groundnut, yam, oil palm, watermelon, okra.
3. China- Japanese millet, rice, buckwheat, and soybean (about 7000 BCE).
4. South-east Asia- wet- and dryland rice, pigeon pea, mung bean, citrus fruits, coconut, taro, yams, banana, breadfruit, sugarcane.
5. Mesoamerica and North America- maize, squash, common bean, lima bean, peppers, amaranth, sweet potato, sunflower.
6. South America- lowlands: cassava; mid-altitudes and uplands (Peru): potato, peanut, cotton, maize.


Adapted from “The origins of agriculture: a biological perspective and a new hypothesis” by Greg Wadley and Angus Martin, published in Australian Biologist 6: 96-105, June 1993

There’s never been agreement on the nature and significance of the rise of civilization, yet the questions posed by the problem are fundamental. How did civilization come about? What animus impelled man to forego the independence, intimacies, and invariability of tribal existence for the much larger and more impersonal political complexity we call the state? What forces fused to initiate the mutation that slowly transformed nomadic societies into populous cities with ethnic mixtures, stratified societies, diversified economies and unique cultural forms? Was the advent of civilization the inevitable result of social evolution and natural laws of progress or was man the designer of his own destiny? Have technological innovations been the motivating force or was it some intangible factor such as religion or intellectual advancement?

To a very good approximation, every civilization that came into being had cereal agriculture as its subsistence base, and wherever cereals were cultivated, civilization appeared. Some hypotheses have linked the two. For example, Wittfogel’s (1957) ‘hydraulic theory’ postulated that irrigation was needed for agriculture, and the state was in turn needed to organize irrigation. But not all civilizations used irrigation, and other possible factors (e.g. river valley placement, warfare, trade, technology, religion, and ecological and population pressure) have not led to a universally accepted model.
Prompted by a possible link between diet and mental illness, several researchers in the late 1970s began investigating the occurrence of drug-like substances in some common foodstuffs. Dohan (1966, 1984) and Dohan et al. (1973, 1983) found that symptoms of schizophrenia were relieved somewhat when patients were fed a diet free of cereals and milk. He also found that people with coeliac disease - those who are unable to eat wheat gluten because of higher than normal permeability of the gut - were statistically likely to suffer also from schizophrenia. Research in some Pacific communities showed that schizophrenia became prevalent in these populations only after they became ‘partially westernized and consumed wheat, barley beer, and rice’ (Dohan 1984).
Groups led by Zioudrou (1979) and Brantl (1979) found opioid activity in wheat, maize and barley (exorphins), and bovine and human milk (casomorphin), as well as stimulatory activity in these proteins, and in oats, rye and soy. Cereal exorphin is much stronger than bovine casomorphin, which in turn is stronger than human casomorphin. Mycroft et al. (1982, 1987) found an analogue of MIF-1, a naturally occurring dopaminergic peptide, in wheat and milk. It occurs in no other exogenous protein. (In subsequent sections we use the term exorphin to cover exorphins, casomorphin, and the MIF-1 analogue. Though opioid and dopaminergic substances work in different ways, they are both ‘rewarding’, and thus more or less equivalent for our purposes.)
Since then, researchers have measured the potency of exorphins, showing them to be comparable to morphine and enkephalin (Heubner et al. 1984), determined their amino acid sequences (Fukudome & Yoshikawa 1992), and shown that they are absorbed from the intestine (Svedburg et al. 1985) and can produce effects such as analgesia and reduction of anxiety which are usually associated with poppy-derived opioids (Greksch et al. 1981, Panksepp et al. 1984). Mycroft et al. estimated that 150 mg of the MIF-1 analogue could be produced by normal daily intake of cereals and milk, noting that such quantities are orally active, and half this amount 'has induced mood alterations in clinically depressed subjects' (Mycroft et al. 1982:895). (For detailed reviews see Gardner 1985 and Paroli 1988.)

Most common drugs of addiction are either opioid (e.g. heroin and morphine) or dopaminergic (e.g. cocaine and amphetamine), and work by activating reward centers in the brain. Hence we may ask, do these findings mean that cereals and milk are chemically rewarding? Are humans somehow 'addicted' to these foods?

Problems in interpreting these findings:
Discussion of the possible behavioral effects of exorphins, in normal dietary amounts, has been cautious. Interpretations of their significance have been of two types:
1. where a pathological effect is proposed (usually by cereal researchers, and related to Dohan’s findings, though see also Ramabadran & Bansinath 1988), and
2. where a natural function is proposed (by milk researchers, who suggest that casomorphin may help in mother-infant bonding or otherwise regulate infant development).

We believe that there can be no natural function for ingestion of exorphins by adult humans. It may be that a desire to find a natural function has impeded interpretation (as well as causing attention to focus on milk, where a natural function is more plausible). It’s unlikely that humans are adapted to a large intake of cereal exorphin, because the modern dominance of cereals in the diet is simply too new. If exorphin is found in cow’s milk, then it may have a natural function for cows; similarly, exorphins in human milk may have a function for infants. But whether or not that is so, adult humans don’t naturally drink milk of any kind, so any natural function can’t apply to them.
Our sympathies therefore lie with the pathological interpretation of exorphins, whereby substances found in cereals and milk are seen as modern dietary abnormalities which may cause schizophrenia, coeliac disease or whatever. But these are serious diseases found in a minority. Can exorphins be having an effect on humankind at large?
Other evidence for 'drug-like' effects of these foods:
Research into food allergy has shown that normal quantities of some foods can have pharmacological, including behavioral, effects. Many people develop intolerances to particular foods. Various foods are implicated, and a variety of symptoms is produced. (The term ‘intolerance’ rather than ‘allergy’ is often used, as in many cases the immune system may not be involved; Egger 1988:159.) Some intolerance symptoms, such as anxiety, depression, epilepsy, hyperactivity, and schizophrenic episodes, involve brain function (Egger 1988, Scadding & Brostoff 1988).
Radcliffe (1982, quoted in 1987:808) listed the foods at fault, in descending order of frequency, in a trial involving 50 people: wheat (more than 70% of subjects reacted in some way to it), milk (60%), egg (35%), corn, cheese, potato, coffee, rice, yeast, chocolate, tea, citrus, oats, pork, plaice, cane, and beef (10%). This is virtually a list of foods that have become common in the diet following the adoption of agriculture, in order of prevalence. The symptoms most commonly alleviated by treatment were mood change (>50%) followed by headache, musculoskeletal and respiratory ailments.
One of the most striking phenomena in these studies is that patients often exhibit cravings, addiction and withdrawal symptoms with regard to these foods (Egger 1988:170, citing Randolph 1978; see also Radcliffe 1987:808-10, 814, Kroker 1987:856, 864, Sprague & Milam 1987:949, 953, Wraith 1987:489, 491). Brostoff and Gamlin (1989:103) estimated that 50% of intolerance patients crave the foods that cause them problems, and experience withdrawal symptoms when excluding those foods from their diet. Withdrawal symptoms are similar to those associated with drug addictions (Radcliffe 1987:808). The possibility that exorphins are involved has been noted (Bell 1987:715), and Brostoff and Gamlin conclude (1989:230):
“... the results so far suggest that they might influence our mood. There is certainly no question of anyone getting ‘high’ on a glass of milk or a slice of bread - the amounts involved are too small for that - but these foods might induce a sense of comfort and wellbeing, as food-intolerant patients often say they do. There are also other hormone-like peptides in partial digests of food, which might have other effects on the body.”
There’s no possibility that craving these foods has anything to do with the popular notion of the body telling the brain what it needs for nutritional purposes. These foods weren’t significant in the human diet before agriculture; large quantities of them cannot be necessary for nutrition. In fact, the standard way to treat food intolerance is to remove the offending items from the patient’s diet.

A suggested interpretation of exorphin research:
But what are the effects of these foods on normal people? Though exorphins cannot have a naturally selected physiological function in humans, this does not mean that they have no effect. Food intolerance research suggests that cereals and milk, in normal dietary quantities, are capable of affecting behavior in many people. And if severe behavioral effects in schizophrenics and coeliacs can be caused by higher than normal absorption of peptides, then more subtle effects, which may not even be regarded as abnormal, could be produced in people generally.

The evidence presented so far suggests the following interpretation:
The ingestion of cereals and milk, in normal modern dietary amounts by normal humans, activates reward centers in the brain. Foods that were common in the diet before agriculture (fruits and so on) do not have this pharmacological property. The effects of exorphins are qualitatively the same as those produced by other opioid and/or dopaminergic drugs, that is, reward, motivation, reduction of anxiety, a sense of wellbeing, and perhaps even addiction. Though the effects of a typical meal are quantitatively less than those of doses of those drugs, most modern humans experience them several times a day, every day of their adult lives.

Hypothesis: exorphins and the origin of agriculture and civilization:
When this scenario of human dietary practices is viewed in light of the problem of the origin of agriculture, it suggests an hypothesis that combines the results of these lines of enquiry. Exorphin researchers, perhaps lacking a long-term historical perspective, have generally not investigated the possibility that these foods really are drug-like, and have instead searched without success for exorphin’s natural function. The adoption of cereal agriculture and the subsequent rise of civilization haven’t been satisfactorily explained, because the behavioral changes underlying them have no obvious adaptive basis.
These unsolved and until-now unrelated problems may in fact solve each other. The answer, we suggest, is this: cereals and dairy foods are not natural human foods, but rather are preferred because they contain exorphins. This chemical reward was the incentive for the adoption of cereal agriculture in the Neolithic. Regular self-administration of these substances facilitated the behavioral changes that led to the subsequent appearance of civilization.

This is the sequence of events that we envisage:
Climatic change at the end of the last glacial period led to an increase in the size and concentration of patches of wild cereals in certain areas (Wright 1977). The large quantities of cereals newly available provided an incentive to try to make a meal of them. People who succeeded in eating sizeable amounts of cereal seeds discovered the rewarding properties of the exorphins contained in them. Processing methods such as grinding and cooking were developed to make cereals more edible. The more palatable they could be made, the more they were consumed, and the more important the exorphin reward became for more people.
At first, patches of wild cereals were protected and harvested. Later, land was cleared and seeds were planted and tended, to increase quantity and reliability of supply. Exorphins attracted people to settle around cereal patches, abandoning their nomadic lifestyle, and allowed them to display tolerance instead of aggression as population densities rose in these new conditions.
Though it was, we suggest, the presence of exorphins that caused cereals (and not an alternative already prevalent in the diet) to be the major early cultigens, this does not mean that cereals are ‘just drugs’. They have been staples for thousands of years, and clearly have nutritional value. However, treating cereals as ‘just food’ leads to difficulties in explaining why anyone bothered to cultivate them. The fact that overall health declined when they were incorporated into the diet suggests that their rapid, almost total replacement of other foods was more due to chemical reward than to nutrition.
It’s noteworthy that the extent to which early groups became civilized correlates with the type of agriculture they practiced. That is, major civilizations (in south-west Asia, Europe, India, and east and parts of South-East Asia; central and parts of north and south America; Egypt, Ethiopia and parts of tropical and west Africa) stemmed from groups which practiced cereal, particularly wheat, agriculture (Bender 1975:12, Adams 1987:201, Thatcher 1987:212). (The rarer nomadic civilizations were based on dairy farming.) Groups which practiced vege-culture (of fruits, tubers etc.), or no agriculture (in tropical and south Africa, north and central Asia, Australia, New Guinea and the Pacific, and much of north and south America) did not become civilized to the same extent.
Thus major civilizations have in common that their populations were frequent ingesters of exorphins. We propose that large, hierarchical states were a natural consequence among such populations. Civilization arose because reliable, on-demand availability of dietary opioids to individuals changed their behavior, reducing aggression, and allowed them to become tolerant of sedentary life in crowded groups, to perform regular work, and to be more easily subjugated by rulers. Two socioeconomic classes emerged where before there had been only one (Johnson & Earle 1987:270), thus establishing a pattern which has been prevalent since that time.

The natural diet and genetic change:
Some nutritionists deny the notion of a pre-agricultural natural human diet on the basis that humans are omnivorous, or have adapted to agricultural foods (e.g. Garn & Leonard 1989; for the contrary view see for example Eaton & Konner 1985). An omnivore, however, is simply an animal that eats both meat and plants: it can still be quite specialized in its preferences (chimpanzees are an appropriate example). A degree of omnivory in early humans might have pre-adapted them to some of the nutrients contained in cereals, but not to exorphins, which are unique to cereals.
The differential rates of lactase deficiency, coeliac disease and favism (the inability to metabolize fava beans) among modern racial groups are usually explained as the result of varying genetic adaptation to post-agricultural diets (Simopoulos 1990:27-9), and this could be thought of as implying some adaptation to exorphins as well. We argue that little or no such adaptation has occurred, for two reasons: first, allergy research indicates that these foods still cause abnormal reactions in many people, and that susceptibility is variable within as well as between populations, indicating that differential adaptation is not the only factor involved. Second, the function of the adaptations mentioned is to enable humans to digest those foods, and if they are adaptations, they arose because they conferred a survival advantage. But would susceptibility to the rewarding effects of exorphins lead to lower, or higher, reproductive success? One would expect in general that an animal with a supply of drugs would behave less adaptively and so lower its chances of survival. But our model shows how the widespread exorphin ingestion in humans has led to increased population. And once civilization was the norm, non-susceptibility to exorphins would have meant not fitting in with society. Thus, though there may be adaptation to the nutritional content of cereals, there will be little or none to exorphins. In any case, while contemporary humans may enjoy the benefits of some adaptation to agricultural diets, those who actually made the change ten thousand years ago did not.

Other 'non-nutritional' origins of agriculture models:
We are not the first to suggest a non-nutritional motive for early agriculture. Hayden (1990) argued that early cultigens and trade items had more prestige value than utility, and suggested that agriculture began because the powerful used its products for competitive feasting and accrual of wealth. Braidwood et al. (1953) and later Katz and Voigt (1986) suggested that the incentive for cereal cultivation was the production of alcoholic beer:
“Under what conditions would the consumption of a wild plant resource be sufficiently important to lead to a change in behavior (experiments with cultivation) in order to ensure an adequate supply of this resource? If wild cereals were in fact a minor part of the diet, any argument based on caloric need is weakened. It is our contention that the desire for alcohol would constitute a perceived psychological and social need that might easily prompt changes in subsistence behavior” (Katz & Voigt 1986:33).
This view is clearly compatible with ours. However, there may be problems with an alcohol hypothesis: beer may have appeared after bread and other cereal products, and been consumed less widely or less frequently (Braidwood et al. 1953). Unlike alcohol, exorphins are present in all these products. This makes the case for chemical reward as the motive for agriculture much stronger. Opium poppies, too, were an early cultigen (Zohari 1986). Exorphin, alcohol, and opium are primarily rewarding (as opposed to the typically hallucinogenic drugs used by some hunter-gatherers) and it is the artificial reward which is necessary, we claim, for civilization. Perhaps all three were instrumental in causing civilized behavior to emerge.
Cereals have important qualities that differentiate them from most other drugs. They are a food source as well as a drug, and can be stored and transported easily. They are ingested in frequent small doses (not occasional large ones), and do not impede work performance in most people. A desire for the drug, even cravings or withdrawal, can be confused with hunger. These features make cereals the ideal facilitator of civilization (and may also have contributed to the long delay in recognizing their pharmacological properties).
Our hypothesis is not a refutation of existing accounts of the origins of agriculture, but rather fits alongside them, explaining why cereal agriculture was adopted despite its apparent disadvantages and how it led to civilization.
Gaps in our knowledge of exorphins limit the generality and strength of our claims. We do not know whether rice, millet and sorghum, or the grass species harvested by African and Australian hunter-gatherers, contain exorphins. We need to be sure that pre-agricultural staples do not contain exorphins in amounts similar to those in cereals. We do not know whether domestication has affected exorphin content or potency. A test of our hypothesis by correlation of diet and degree of civilization in different populations will require quantitative knowledge of the behavioral effects of all these foods.
We do not comment on the origin of non-cereal agriculture, nor why some groups used a combination of foraging and farming, reverted from farming to foraging, or did not farm at all. Cereal agriculture and civilization have, during the past ten thousand years, become virtually universal. The question, then, is not why they happened here and not there, but why they took longer to become established in some places than in others. At all times and places, chemical reward and the influence of civilizations already using cereals weighed in favor of adopting this lifestyle, the disadvantages of agriculture weighed against it, and factors such as climate, geography, soil quality, and availability of cultigens influenced the outcome. There is a recent trend to multi-causal models of the origins of agriculture (e.g. Redding 1988, Henry 1989), and exorphins can be thought of as simply another factor in the list. Analysis of the relative importance of all the factors involved, at all times and places, is beyond the scope of this paper.
“An animal is a survival machine for the genes that built it. We too are animals, and we too are survival machines for our genes. That is the theory. In practice it makes a lot of sense when we look at wild animals.... It is very different when we look at ourselves. We appear to be a serious exception to the Darwinian law.... It obviously just isn't true that most of us spend our time working energetically for the preservation of our genes” (Dawkins 1989:138).
Many ethologists have acknowledged difficulties in explaining civilized human behavior on evolutionary grounds, in some cases suggesting that modern humans do not always behave adaptively. Yet since agriculture began, the human population has risen by a factor of 1000: Irons (1990) notes that ‘population growth is not the expected effect of maladaptive behavior’.
We have reviewed evidence from several areas of research which shows that cereals and dairy foods have drug-like properties, and shown how these properties may have been the incentive for the initial adoption of agriculture. We suggested further that constant exorphin intake facilitated the behavioral changes and subsequent population growth of civilization, by increasing people's tolerance of (a) living in crowded sedentary conditions, (b) devoting effort to the benefit of non-kin, and (c) playing a subservient role in a vast hierarchical social structure.
Cereals are still staples, and methods of artificial reward have diversified since that time, including today a wide range of pharmacological and non-pharmacological cultural artifacts whose function, ethologically speaking, is to provide reward without adaptive benefit. It seems reasonable then, to suggest that civilization not only arose out of self-administration of artificial reward, but is maintained in this way among contemporary humans. Hence a step towards resolution of the problem of explaining civilized human behavior may be to incorporate into ethological models this widespread distortion of behavior by artificial reward.


Wanting to further explore this idea, I gathered from internet sources:
One model for the evolution of alcohol consumption suggests that ethanol only entered the human diet after people began to store extra food, potentially after the advent of agriculture, and that humans subsequently developed ways to intentionally direct the fermentation of food about 7000 BCE. As almost any cereal containing certain sugars can undergo spontaneous fermentation due to wild yeasts in the air, beer-like beverages developed independently throughout the world when people domesticated cereals. Chemical tests of ancient pottery jars reveal beer produced about 5000 BCE in today’s Iran. In Mesopotamia, the oldest evidence of beer we have so far is a 6,000-year-old Sumerian tablet depicting people drinking through reed straws from a communal bowl. In China, residue on pottery dating from between 5400 and 4900 years ago shows beer was brewed using barley and other grains.
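(That ‘spontaneous fermentation’, by the way, is ordinary alcoholic fermentation: wild yeasts convert the sugars in a wet grain mash into ethanol and carbon dioxide, roughly C6H12O6 → 2 C2H5OH + 2 CO2. So any cereal preparation in which some of the starch has already broken down into sugars can turn mildly alcoholic all on its own.)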
Discovery of late Stone Age jugs suggests that intentionally fermented beverages existed at least as early as the Neolithic period (c. 10,000 BCE). Chemical analysis of jars from the Neolithic village of Jiahu in the Henan province of northern China revealed traces of alcohol that were absorbed and preserved. According to a study published in the Proceedings of the National Academy of Sciences in December 2004, chemical analysis of the residue confirmed that a fermented drink made of grapes, hawthorn berries, honey, and rice was being produced in 7000–6650 BCE. This is approximately the time when barley beer and grape wine were beginning to be made in the Middle East. The earliest firm evidence of wine production dates back to 6000 BCE in Georgia. Medicinal use of alcohol was mentioned in Sumerian and Egyptian texts dating from about 2100 BCE. Evidence of alcoholic beverages has also been found dating from 3150 BCE in ancient Egypt, 3000 BCE in Babylon, 2000 BCE in pre-Hispanic Mexico, and 1500 BCE in Sudan.
There may have been a single genetic mutation 10 million years ago that endowed humans with an enhanced ability to break down ethanol. Scientists note that the timing of this mutation coincides with a shift to a terrestrial lifestyle. The ability to consume ethanol may have helped human ancestors dine on rotting, fermenting fruit that fell on the forest floor when other food was scarce.
The Mediterranean region contains the earliest archeological evidence of human opium use; the oldest known seeds date back to earlier than 5000 BCE, in the Neolithic age, with purposes such as food, anesthetics, and ritual. The earliest recorded use of narcotics dates back to 4000 BCE. Evidence from ancient Greece indicates that opium was consumed in several ways, including inhalation of vapors, suppositories, medical poultices, and in combination with hemlock for suicide. Indian scholars maintain that ancient verses and the history shown in them were orally transmitted thousands of years before even 4000 BCE.
Tea drinking likely began in Yunnan province during the Shang Dynasty (1500–1046 BCE). Legend, though, has the Yellow Emperor, inventor of agriculture and Chinese medicine, drinking a bowl of just-boiled water in about 2737 BCE (he had decreed that his subjects must boil water before drinking it) when some tea leaves fell in and he enjoyed the taste. In 1978, archeologists found tea relics in the Tianluo mountains and estimated them at 7,000 years old. Another promising find in the same mountains was old roots of the Camellia sinensis plant with broken pottery; these roots were determined to be about 6,000 years old (from about 4000 BCE). It wasn’t until the Tang dynasty (618–907 CE) that consumption became widespread.
This goes a bit beyond my ken, and I expect that of most readers too:
Addiction is a disorder of the brain’s reward system which arises through transcriptional and epigenetic mechanisms and occurs over time from chronically high levels of exposure to an addictive stimulus (e.g., eating food, the use of cocaine, engagement in sexual intercourse, participation in high-thrill cultural activities such as gambling, etc.). ΔFosB, a gene transcription factor, is a critical component and common factor in the development of virtually all forms of behavioral and drug addictions. Research into ΔFosB’s role in addiction has demonstrated that addiction arises, and the associated compulsive behavior intensifies or attenuates, along with the over-expression of ΔFosB in the D1-type medium spiny neurons of the nucleus accumbens. Due to the causal relationship between ΔFosB expression and addictions, it is used pre-clinically as an addiction biomarker. ΔFosB expression in these neurons directly and positively regulates drug self-administration and reward sensitization through positive reinforcement, while decreasing sensitivity to aversion. I expect that it mostly means that correlation has been found between brain chemistry and the compulsions we call addiction.

A common belief is that psychotropic plant chemicals evolved recurrently throughout evolutionary history. Archaeological records indicate the presence of psychotropic plants and drug use in ancient cultures going back as far as early hominid species, millions of years ago. Roughly 13,000 years ago, the inhabitants of Timor commonly used betel nut (Areca catechu), as did those in Thailand around 8,700 BCE. At the beginning of European colonialism, and perhaps for 40,000 years before that, Australian aborigines used nicotine from two different indigenous sources: the pituri plant (Duboisia hopwoodii) and Nicotiana gossei. North and South Americans also used nicotine from their indigenous plants N. tabacum and N. rustica. Ethiopians and northern Africans were documented as having used an ephedrine-analog, khat (Catha edulis), before European colonization. Cocaine (Erythroxylum coca) was taken by Ecuadorians about 3000 BCE and by the indigenous people of the western Andes almost 7,000 years ago (5000 BCE). The substances were popularly administered through the buccal cavity, within the cheek. Nicotine, cocaine, and ephedrine sources were first mixed with an alkali substance, most often wood or lime ash, creating a free base to facilitate diffusion of the drug into the blood stream. Alkali paraphernalia have been found throughout these regions and documented within the archaeological record. Although the buccal method is believed to have been the most common method of drug administration, inhabitants of the Americas may also have administered substances nasally, rectally, and by smoking.
Sex addiction as a term first emerged in the mid-1970s, when various members of Alcoholics Anonymous sought to apply the principles of the 12 steps to recovery from serial infidelity and other unmanageable compulsive sexual behaviors, which brought a powerlessness and unmanageability similar to what they had experienced with alcoholism. Certainly compulsive sexual behavior, as with “Jack the Ripper” (!), existed long before 1970.
