Mythorelics

Taoist mythology, Lanna history, mythology, the nature of time and other considered ramblings

Location: Chiangrai, Chiangrai, Thailand

Author of many self-published books, including several about Thailand and Chiang Rai, Joel Barlow lived in Bangkok 1964-65, attending 6th grade with the International School of Bangkok's only Thai teacher. He first visited Chiang Rai in 1988, and moved there in 1998.

Friday, May 18, 2018

Convincing Others

We’re born carrying certain tendencies, attitudes, concerns and proclivities. What occurred in our progenitors’ lives we only slowly become at all able to criticize. We expect patterns to be repeated, and they mostly are. It’s difficult to recognize, or realize, what can or should be changed, especially as there are so many limits to what we can have any enduring impact upon. Words are at best only superimposed over natural anticipations, and taken no more seriously than they ought to be. Our feelings we take much more seriously.
Attitudes, both epigenetically derived and experientially received, change best not from presentations of logical syllogisms or scientific demonstration, but from interest in safety, self-preservation and the well-being of family, community and polity. It’s not about money, power or influence, but of maintenance of continuity – we will maintain what we can of what made us ourselves.
To correct the thinking of the wrong-headed, one must play to that, or achieve nothing. Preaching, pleading, brow-beating or diagramming don’t touch emotion, and emotion is the basis of attitude. Display what is desired, then how it can be lost, and what might be retained, and attitude can be influenced. Getting past blinders is difficult in the best of circumstances; too bad that we’ve clearly little chance of retaining ANYTHING through two more generations (for almost all, anyway).

Thursday, April 26, 2018

Beautiful, blessed floating Isles of Eldorado

Instead of creating more Frankenstein monsters and further imperiling our progeny, couldn’t we revive (and maybe refine) viable ancient methods to sustain ourselves? Nuclear energy, GMOs, plastics and lots of other innovations, inventions and developments since the advent of “civilization” don’t just involve unintended consequences, but seem to extend the addictive dependencies (including even grain and cheese) that started with agriculture. Only a very few of us can live as hunter-gatherers, but alternatives to our modern proclivities do exist.
If you ponder humanity’s prospects for the next 50 years, things look bleak. Climate change is shrinking our coastlines. Unsustainable farm practices are depleting our soil. Billions of new mouths will need to be fed. In a world of swelling populations and dwindling farmland, some predict we’re running out of places to go — and to grow food. What’s to be done?
A New York design firm has been growing plants on man-made islands near Manhattan and Philadelphia. On the west coast of Vancouver Island, 14 floating greenhouses and a two-story house are tethered together on re-purposed fish floats. In Thailand and the waterlogged Netherlands, movements are underway to construct floating homes, greenhouses, hospitals and prisons.
When Europeans began settling the Americas in the 1500s and 1600s, they failed to recognize most local gardens, due to lack of mono-culture. Beans grew on corn stalks with melons beneath, and smelly Christians saw only jungle. Now our myopic, egotistic self-indulgence is leading to rising seas, expanding deserts and insufficient space. But “primitives” dealt well with space limitation. Remember, rain-forest had multiple levels, with ecosystems above and below each other! Sky-scrapers are not the only way to gain new space.
Despite claims that real estate is a good investment because “they aren’t making any more of it,” it appears that new agricultural spaces may be on the horizon. I don’t know that global warming will provide valuable new property at the earth’s poles, but what about those huge floating garbage-patches? Cover ‘em with hemp-fiber and grow mushrooms!

Lake Kisale, on the border of Congo and Zambia, is famous for floating islands inhabited by fishermen. John Geddie described them: “The matted growths of aquatic plants fringing its shores are cut off in sections, and towed to the center of the lake. Logs, brushwood, and earth are laid on the floating platform, until it acquires a consistency capable of supporting a native hut and a plot of bananas and other fruit trees, with a small flock of goats and poultry. The island is anchored by a stake driven into the bed of the lake, and if the fishing becomes scarce, or should other occasion occur for shifting the domicile, the proprietor simply draws the peg, and shifts the floating little mansion, farm and stock, whither he chooses.”
At Inle Lake in northern Burma there’s something similar. Around a quarter of the massive freshwater lake—the country’s second largest body of water—is topped with these manmade gardens. Farmers glide between their plots from atop boats often propelled by leg-powered oars. Produce (tomatoes, string beans, cucumbers, flowers, eggplants and gourds) is plucked from patches that rise and fall with the currents. Creating the tiny islands is no easy task. Farmers gather clumps of water hyacinth and sea-grass, secure them in place with large bamboo poles, which they then stake into the lake’s muddy bottom. They then heap even more layers of sea-grass and silt atop the mounds before planting seeds.
Traditional bamboo houses of Inle Lake are built on stilts, with walls made by weaving strips of bamboo. These walls provide shade, while the light structure and small gaps let light and air pass through. Since the house is raised on stilts, air can flow on all sides. Roof overhangs shade the walls and protect them from rain, and the structure dries quickly despite the humid environment.
Building atop a lake can provide additional benefits. When water evaporates, it cools down the air around it. Water has a high thermal capacity which means that water temperature varies less during the day than air temperature. It keeps houses and villages cooler during the day and warmer during the night, creating more stable temperatures than what can be achieved on land. On the lake there are fewer obstructions, so wind speeds are higher, providing even more cooling air flow.
Some newer houses are built with wood and have two stories. These houses look more durable and may have a higher status for some locals. But as the walls are more solid, less air passes through and this makes the houses less comfortable. The stilts of the houses rot and have to be replaced approximately every 15 years, and using a fast-growing material like bamboo makes replacing the stilts and the houses more sustainable.
While the houses don’t actually float, the gardens do and this makes them even more flood resistant. As the water level drops and rises, the man-made islands move with the water level. Growing food on the lake also increases the total land available for agriculture, and there is easy access to water for irrigation, even during the dry season.
The practice of farming atop the lake, rather than around it, is thought to have started in the 19th century before intensifying in the 1960s. Though the unusual agriculture has boosted the region’s economy, people have since started to worry that chemical fertilizers, pesticides, and runoff are destroying the lake’s natural ecosystem. Pesticide and fertilizer runoff from floating farms has led to many traditional fish species disappearing, and much less fishing.

When the Spanish arrived in 1519, they saw a vast network of “chinampas” (gardens protruding into or within lakes or ponds) in the shallow lake beds of the Mexico Valley, built from water hyacinth reeds topped with soil. The Aztec capital, Tenochtitlan, was surrounded with these small artificial floating gardens for agriculture. These vast systems were self-watering, self-fertilizing, easily navigated by boat, and highly productive: a perfect example of a multi-functioning, self-supporting, and life-regenerating system.
Other, more truly floating garden systems are found worldwide. In South East Asia they may have roots dating far before the Aztec Empire. Large-scale floating gardens have been made with aquaponics systems in China, for growing rice, wheat and canna lily, with some installations exceeding 10,000 m2 (2.5 acres). Floating gardens are also found in Vietnam, Bangladesh and, most famously, at Inle Lake in Myanmar. Floating artificial islands are generally made of bundled reeds; the best-known examples are those of the Uros people of Lake Titicaca, Peru, who build their villages upon what are in effect huge rafts of bundled totora reeds. The Uros originally created their islands to prevent attacks by their more aggressive neighbors, the Incas and Collas.
Spiral Island was a more modern, one-person effort to build an artificial floating island, on the Caribbean coast of Mexico. Modern artificial islands mimicking the floating reed-beds of the Uros are increasingly used by local governments and catchment managers to improve water quality at source, reducing pollutants in surface water bodies and providing biodiversity habitat; examples include Gold Coast City Council in Australia. Artificial floating reed-beds are commonly anchored to the shoreline or to the bottom of the water body, to ensure the system does not float away in a storm or create a hazard.

A permaculture principle tells us to “Use edge and value the marginal.” The edges are where there’s the most biodiversity and productivity, as in an estuary, where the river meets the ocean, or at the boundary of forest and prairie. There can be canopy trees, understory trees and shrubs, land crops, edge crops, emergent crops, trellis crops, fish, crayfish, water fowl, chickens on the land base, and floating water plants for fertility and mulch production. There’s immense possibility in cultivating these systems: trellises hooping over the waterways with fruits hanging down to be harvested by boat, or ways of easily draining the waterways to harvest fish and crayfish, and flooding the land base to bring nutrient-rich water to the plants.
These islands can also act as floating treatment wetlands (FTWs). Several plants can play a part in cleaning water by absorbing dissolved nutrients, such as excess nitrates and phosphates, thereby reducing the content of these chemicals. One FTW design, based on the soil-less hydroponics technique, comprises four layers: floatable bamboo forms its base, over which styrofoam cubicles are placed; the third layer is composed of gunny bags; gravel forms the final layer. Cleaning agents planted on the FTW include vetiver, canna, cattails, bulrush, citronella, hibiscus, fountain grass, flowering herbs, tulsi and ashwagandha. The root systems filter out sediments and pollutants.

Floating islands are not all good. In the Brazilian Amazon, floating islands known as Matupá form naturally in lakes on the floodplains of white-water rivers, ranging in size from a few square meters to hundreds of acres. Sometimes they’re just drifting masses of peat, mud, and plants; in extreme cases, these “islands” contain trees over 50 feet tall and 8-12 inches in diameter. Similar islands occur in Argentina, Australia, Finland, India, Japan, Kenya, and Papua New Guinea. Aquatic plant managers call them tussocks, floating islands or floating forests.

When freely drifting in Florida waterways, they can be extremely expensive in terms of property damage, lost income, and management costs. Tussocks and floating islands are a product of the natural aging process of water bodies and probably have always been a part of Florida’s shallow lakes. Historically, their occurrence was kept in check by periodic drought and fire that kept them within lake margins, or occasional floods that deposited them in uplands or downstream marshes. Nowadays, water levels in most of Florida’s public lakes are manipulated by weirs, dams, or levees, which eliminate the extreme high- and low-water events that historically suppressed tussock and floating island formation.
If not managed, floating invasive plants like water hyacinth and water lettuce can form large rafts that act as a substrate for emergent plants to colonize. Emergent plants like primrose willow tie the rafts together below the surface with their roots, and above the surface with stems and branches. Native emergent species such as pennywort and smartweed grow out from the shoreline to form mats across the water surface. If the mats become large enough, wind and wave action can tear them loose, generating floating tussocks. The non-native Cuban bulrush (Oxycaryum cubense), a grass-like sedge, sends long runners among and over other emergent plants and can eventually displace them with a floating tussock of its own. As water levels rise after draw-downs or droughts, masses of spongy plants like cattail and pickerelweed can pull loose from shallow, soft mud flats.
Floating islands can form in the same manner as tussocks. They’re composed of aquatic and sometimes upland plants, both herbaceous and woody. Most important, they’re characterized by suspended masses of organic deposits like peat and mud that vary from a few inches to a few feet thick. In some cases, the sediments are compact or fibrous enough that the emergent plants, whose roots are interwoven into the sediments, pull as much as several feet of organic material with them to the surface as lakes re-fill. Simply killing the vegetation on these floating islands doesn’t eliminate them: the mud, peat, and woody material continue to float and the cycle repeats. Often they must be dismantled to be controlled.

Nutrient pollution is a growing problem along the Upper Mississippi, where water rich in nitrogen and phosphates from crop fertilizer flows directly into the river without the benefit of wetland filtration. The problem is particularly acute in the levee region of southern Iowa, where farmers are groping for a remedy. The polluted water eventually reaches the Gulf of Mexico, creating a dead zone that now spans 6,700 square miles and costs fisheries $2.8 billion per year.
Environmentalists have filed lawsuits against the Environmental Protection Agency to press for tighter standards for nitrogen and phosphorus runoff. Worried that the agency might step in with new mandates, farm groups are weighing a temporary solution: floating islands that could process the nutrients before they reach the river.
The islands are made from a nonwoven mat of filter material constructed from recycled plastic bottles, with plant roots growing through the bottom of the mat, hosting microbes that will eventually yield cleaner water and provide food for fish. They mimic the role that wetlands once played in assimilating sediment from local agriculture. Charles Theiling, a hydrological specialist for the Army Corps of Engineers in Davenport, Iowa, has met with 500 farmers who favor using a concentrated wetland-effect, or floating islands, for abating the runoff problem.

The pollution is rooted in the disappearance of the interconnected flood plains that once existed in this Upper Mississippi region, Mr. Theiling explained. “The flood plains acted like a kidney to the river — they filtered out the pollutants.” But where flood plains once existed, two million acres of rich agricultural land are now planted with corn each growing season. Swamps have been drained and trees cut. Meanwhile, 110 individual levee districts with pumps and plumbing have been installed to limit flood risks on these lands, which means that excess rainfall is channeled directly into the levees. “Our river wetlands are degraded because of conditions of our watershed and the ones we created by eliminating the flood plain,” Theiling said. “We are not getting the natural ecosystem service benefits of nutrient processing and sediment assimilation that we would get if this land were in its natural state.”
Fertilizer doesn’t just promote growth in crops; anything living will respond to it, most certainly aquatic plants and the algae blooms that choke off beneficial micro-organisms, limiting food sources for fish and other aquatic life. Floating islands can support biological processes that feed on fertilizer in beneficial ways, cleaning the water and acting as a sort of bio-reactor: providing something for microscopic life forms to live on, drawing on the carbon in the water while using up the phosphorus and nitrogen.
While the islands have not yet been used widely in the Upper Mississippi, over 4,800 have been installed around the world. Aside from the United States Army Corps of Engineers, public and private entities ranging from a wastewater facility at a Louisiana prison to a landfill in New Zealand have commissioned such islands to clean polluted water, provide nutrients for fish and contribute to species habitat.

Meanwhile, the natural world is rapidly becoming a giant pile of plastic waste. The ocean is full of plastic, with floating, continent-size patches of it in the Pacific and Atlantic Oceans, plus newly formed ones in the Arctic. Some uninhabited islands are drowning in the stuff. There are at least six huge garbage patches in our oceans; we’re slowly suffocating natural ecologies with our trash. Fish, birds, and other animals consume bits of the five trillion pieces of plastic strewn about the ocean, and doing so can kill them. Weirdly, though, scientists have concluded that, based on the amount of plastic we make every year, there is only about one-hundredth as much plastic floating around as the numbers would suggest. Although there are many possible explanations for this, it may be that microbes are breaking the plastic down. A team of Japanese scientists found a species of bacteria that eats the type of plastic found in most disposable water bottles. The discovery could lead to new methods to manage the more than 50 million tons of this particular type of plastic produced globally each year.
The plastic used for water bottles is known as polyethylene terephthalate, or PET. It’s also found in polyester clothing, frozen-dinner trays and blister packaging; walk down an aisle in Wal-Mart and you’ll see lots of it. Part of the appeal of PET is that it is lightweight, colorless and strong. However, it’s also notoriously resistant to being broken down by microbes. Studies had found a few species of fungi which grow on PET, but until just recently, no microbes that could eat it.
In 2016, scientists from Japan tested different bacteria from a bottle-recycling plant and found one, which they named Ideonella sakaiensis, that could digest the plastic used to make single-use drink bottles. It works by secreting an enzyme, PETase, which splits certain chemical bonds (esters) in PET, leaving smaller molecules that the bacteria can absorb, using the carbon in them as a food source. Later, while studying this enzyme, researchers inadvertently engineered a mutant PETase even better at degrading PET; it begins breaking down PET in just a few days, compared to the 450 years it takes for the stuff to degrade naturally.
Although other bacterial enzymes were already known to slowly digest PET, the new enzyme had apparently evolved specifically for this job. This suggests it might be faster and more efficient and so have the potential for use in bio-recycling.
One internet article confusingly claims that there are plenty of cheaper and easier ways to break down and recycle PET, and that PET is in fact one of the easier types of plastic to break down, so industrial-scale production of enzymes or genetically modified bacteria isn’t necessary. But consider: bacteria that can eat oil have also been discovered, yet oil slicks remain a huge problem, and their incidence isn’t decreasing, any more than is the addition of methane and carbon dioxide to our atmosphere, despite our knowing that this will make our climate oven-like within this century.
PETase could be used to break down bottles before they end up in the environment, much as we could stop riding in cars and airplanes, but convenience continues to rule. “Current recycling strategies for PET bottles mostly focus on mechanical recycling, so they chop the bottles up and use them for applications that typically do not need the same materials requirements as bottles,” says study co-author Gregg Beckham, a researcher at the U.S. Department of Energy’s National Renewable Energy Laboratory. “Engineered enzymes that break PET down to its building blocks would enable the ability to do full bottle-to-bottle recycling,” which might help decrease oil drilling demands for new plastic production. Waxworm caterpillars have been found to break down plastic in a matter of hours, and mealworms possess gut microbes that eat through polystyrene. Beckham says, given how ubiquitous environmental pollution has become, “it is likely that microbes are evolving faster and better strategies to break down man-made plastics. It seems that nature is evolving solutions.” Seems kind of deus ex machina – we can’t help ourselves, so something else has to.
Although a growing number of plastic-consuming microbes will help limit the absolutely disgraceful amounts of plastic dumped, much is consumed by animals that get eaten by us. Our garbage hardly assists biodiversity, and only the morally repugnant think we can keep dumping plastic in the oceans without consequence, but most of our major decision-makers do appear to be morally repugnant.
Still, if these bacteria can be encouraged to proliferate across the ocean, they might reduce humanity’s negative impact on it. But some dismiss the idea of adding either the original bacteria or the genetically enhanced version to ocean environments to speed the degradation of plastic debris, calling it irresponsible, as there would be too many side effects for the ecosystem.

A machine to clean up the planet’s largest chunk of ocean plastic is finally scheduled to set sail. It’ll work on the Great Pacific Garbage Patch, halfway between California and Hawaii, collecting the 1.8 trillion pieces of plastic rubbish amassed there by ocean currents. The system uses a combination of huge floating nets (dubbed “screens”), held in place by giant tubes ironically made out of plastic, to scoop stubborn waste out of the water. It’ll transfer debris to large ships that will take it to shore for recycling. This intricate system is expected to start work by July 2018. Ultimately, Ocean Cleanup (the Dutch non-profit behind the project) aims to install 60 giant floating scoops, each stretching a mile from end to end. Fish will be able to escape the screens by passing underneath them, while boats will collect the waste every six to eight weeks.
The ambitious system is the brainchild of Dutch teen prodigy Boyan Slat, who presented his ocean-cleaning machine at a Tedx talk six years ago. Despite skepticism from some scientists, Slat dropped out of college to pursue the venture, raising $2.2 million from a crowd-funding campaign, with millions more brought in by other investors.
The Great Pacific Garbage Patch (GPGP) spans 617,763 square miles; it’s larger than France, Germany and Spain combined and contains at least 79,000 tons of plastic. Most of it is “ghost gear”: parts of abandoned or lost fishing gear, including nets and ropes, often from illegal fishing boats. Ghost gear kills more than 100,000 whales, dolphins and seals each year, with many of the sea creatures drowned, strangled or mutilated by the plastic.
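That “larger than France, Germany and Spain combined” claim is easy to sanity-check. The country areas below are my own approximate figures in square miles (not from the article), so treat this as a rough check rather than an authoritative comparison:

```python
# Approximate land areas in square miles (my rough figures, for a sanity check)
FRANCE = 248_573
GERMANY = 137_988
SPAIN = 195_364
GPGP = 617_763  # Great Pacific Garbage Patch estimate cited above

combined = FRANCE + GERMANY + SPAIN
print(combined)         # -> 581925
print(GPGP > combined)  # -> True: the patch is indeed larger
```

Even with generous rounding of the country areas, the patch estimate comfortably exceeds their combined area.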

If instead of cleaning up the mess we’ve made, we try to utilize it as farmland, a new desalination process, shock electrodialysis, could help. In it, water flows through a porous material made of tiny glass particles (called a frit), with membranes or electrodes sandwiching the porous material on each side. When an electric current flows through the system, the salty water divides into regions where the salt concentration is either depleted or enriched. When the current is increased to a certain point, it generates a shockwave between the two zones, sharply dividing the streams and allowing the fresh and salty regions to be separated by a simple physical barrier at the center of the flow. Water flows across the membranes, not through them, which means they’re not as vulnerable to fouling (buildup of filtered material) or to degradation from water pressure as the membranes in regular membrane-based desalination. Conventional electrodialysis has the potential to desalinate seawater quickly and cheaply, but doesn’t remove dirt and bacteria; shock electrodialysis removes both salt and particulate matter, including bacteria.
Forward osmosis pulls water molecules across a membrane, leaving salt and impurities behind, using less than a quarter of the electricity needed for standard desalination, making it easier for the technology to run on renewable power.
Large parabolic mirrors can be used to collect and concentrate the sun’s energy. Inside this solar still, pure water evaporates, while solids remain behind. The system is currently being tested by a water district in California’s agricultural Central Valley, cleaning irrigation runoff tainted with salts leached from the soil.
Another new technology takes water from the air, but how the completely dry air left from this process affects the rest of the atmosphere, and weather patterns, is not understood.

Researchers in India have come up with a water purification system that uses nanotechnology to remove microbes, bacteria and other matter from water, employing composite nanoparticles which emit silver ions that destroy contaminants. Graphene-oxide membranes have also attracted considerable attention as promising candidates for new filtration technologies, and the much sought-after goal of membranes capable of sieving common salts has now been achieved. Possibly, graphene-oxide membrane systems can be built on smaller scales, making this technology accessible to countries which lack the financial infrastructure to fund large plants, without compromising the yield of fresh water produced.
The Solvay process, a 150-year-old, seven-step chemical conversion method widely used to produce sodium carbonate for industrial applications, and one many chemists are working to refine, has been simplified by aiming for sodium bicarbonate (baking soda) rather than sodium carbonate, reducing the needed chemical conversion steps to just two. In the first step, in the presence of ammonia, pure carbon dioxide reacts with waste brine from desalination to create solid baking soda and an ammonium chloride solution. In the second step, that ammonium chloride solution reacts with calcium oxide, producing a calcium chloride solution and ammonia gas. Recovering the ammonia allows it to be reused in the first step, reducing the cost of the process.
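As a sketch, the two steps described above correspond to reactions like the following. The balanced equations are my reconstruction from standard Solvay-type stoichiometry; the source doesn’t spell them out:

```latex
% Step 1: brine + ammonia + carbon dioxide -> solid baking soda + ammonium chloride
% Step 2: ammonium chloride + calcium oxide -> calcium chloride + recovered ammonia
\begin{align*}
\mathrm{NaCl} + \mathrm{NH_3} + \mathrm{CO_2} + \mathrm{H_2O}
  &\longrightarrow \mathrm{NaHCO_3}\!\downarrow + \mathrm{NH_4Cl} \\
2\,\mathrm{NH_4Cl} + \mathrm{CaO}
  &\longrightarrow \mathrm{CaCl_2} + 2\,\mathrm{NH_3}\!\uparrow + \mathrm{H_2O}
\end{align*}
```

The ammonia gas released in the second reaction is exactly what the first reaction consumes, which is why recovering it closes the loop and cuts costs.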

Even were garbage patches to be surrounded by huge nylon nets bigger than those used for industrial fishing, microscopic bits of plastic would still contaminate sea-life, much as happens with the expensive bottled water so popular of late. Well, hot-dogs, sausages and noodles are often coated in plastic, and many of us eat those, and sea-life is getting irradiated anyway. So maybe floating monoculture, as would please Global AgraBiz, is a possibility. Strides in wind and solar power, battery miniaturization and longevity, desalination and perhaps other technologies might help. Should we be able to expand food production and even living space on the high seas, ocean death might not be seen as the huge problem it actually is, but this shouldn’t be. We can no more do without sea-life than without insects or large predators, regardless of how difficult it is to convince many of these truths. Dead oceans wouldn’t produce oxygen, but rather other gasses, and the resultant climate change is difficult to even imagine. In a dead ocean, who knows where a floating island might end up? Currents and winds would be completely different.
Imagination is of the essence here. Either we find ways of combining ancient wisdom with modern discoveries, or society as we have come to know it completely disappears. In all likelihood, within two decades we’re going to be completely dependent on substances that, in their infinite wisdom, our businessmen and politicos have made illegal. For the sake of all, let’s hope we can make it that far without deus ex machina (count on God or the Tao helping only those who help themselves).

A few days after putting this together, it occurred to me that floating doesn't need to involve liquid - maybe polar methane freed up by our release of too much CO2 could be put in huge balloons to support flying islands! Maybe those who'd live on them wouldn't care how taking water from nearby air would affect the rest of the atmosphere...


Friday, April 06, 2018

dormant elaborate labyrinths

Back in midevil times, when I was young, in my readings (E.T.A. Hoffmann, Hesse, Borges, John Gardner’s “The Wreckage of Agathon”, Asimov’s Foundation trilogy, Ursula Le Guin) I not infrequently encountered references to elaborate labyrinths, navigation through which could lead to beneficial greater understandings. I quite liked that myth, but it seems to have, at best, gone dormant.
I’d hoped that through the internet I’d find, if not greater understanding, at least interesting discussion of ideas, or dreams, aspirations, with comparisons of various ways of looking at things or apprehending them. But, sadly, no. It’s mostly just flat assertions.
We have lost so much, for ego.
Like money, at best a questionable God (although, like other gods, it doesn’t answer questions).

Thursday, April 05, 2018

Patent law and the glorification of the individual

Patent law is based on the entirely incorrect notion that an inspired individual, through hard work, can produce something of almost incalculable value which contributes greatly to the ease and utility others can enjoy. The easiest justification for this is through art: painters and musicians, especially, have been seen as of individual genius. I’m not going to argue against Beethoven and Mozart, but it seems to me that we have taken the idea of the importance of the individual far too far. Individual attribution ignores the concept of “standing on the shoulders of giants” and the standing of the individual within society, without which the individual is nothing.
But let’s look at a few of the purportedly greatest minds ever, and what they gave us. Isaac Newton’s laws of motion gave us modern physics, and his work with prisms and reflecting telescopes helped expand understanding of our world. But he was unable to deal with criticism, even to the point of being unable to tolerate open discussion of his ideas. We know now that his revolutionary, and beneficial, ideas on the mechanics of our world weren’t quite correct. Assertion that his ideas on color weren’t entirely correct drove him to complete nervous breakdown. Some of his ideas were quite mad, and his violent and vindictive attacks against both friend and foe reveal a deeply unhappy man. His set of four rules for scientific reasoning – that (1) we are to admit no more causes of natural things than are both true and sufficient to explain their appearances, (2) the same natural effects must be assigned to the same causes, (3) qualities of bodies are to be esteemed as universal, and (4) propositions deduced from observation of phenomena should be viewed as accurate until other phenomena contradict them – remains profound and valuable; but are we to imagine that no-one else would have provided these rules soon after he did, had he not?
William Shakespeare is regarded as the world's pre-eminent dramatist, but was hardly revered during his lifetime. It’s claimed both that he stole plots, poems, and even entire stories from other people, and that the real author of the works attributed to him was Edward de Vere (maybe with “a little clique of disappointed and defeated politicians,” or substantial contributions from royalty other than de Vere, or even from members of the original acting troupe that presented the plays). Dennis McCarthy and June Schlueter recently used off-the-shelf plagiarism software to make a case that Shakespeare (or de Vere) stole from George North, a barely known writer/soldier, for almost a dozen of ‘his’ works, including “King Lear,” “Macbeth” and “Coriolanus.”
Albert Einstein is also accused of using the ideas of other people, being wrong about relativity and as mistaken about gravity as Newton. Much as with Darwin, it’s clear that had he not presented what he did when he did, someone else would soon have anyway. Somehow a need for individual heroes arose to replace more generic ones like Coyote of many Native American tribes. Most of the world has always been too wise to push individualism toward the absurdities central in our current ‘dominant narrative’ – a world-view which cannot last. Ownership, as it were, is theft. One shares in, and must share with. No man is an island, nor a prime mover. An act of invention hardly should release a person from obligations, the way too much money has come to do.
Sure, people should profit from hard work, from inventiveness, even from good inspirations that require little in the way of sweat or exertion. But how much, and for how long? Is resting on laurels really a noble occupation?


Sunday, April 01, 2018

(some part of) Why I live with Yanamamo on the Upper Sepik

(with a nod and a wink to Eudora Welty for her wonderful "Why I live at the P.O." - this, other than the somewhat metaphoric title, is all true tho)

One day, living in Bangkok, I helped a guy apply a cement plaster to gaps and holes left when a huge Ganesh statue was cast. We also applied tiny square tiles fronted with gold leaf to the dais edging below the idol. I kept a few of those tiles for almost 30 years.
My father was organizing Thailand’s first department of psychology. Thais find most “psychology” ridiculous, but considered my father’s behaviorism at least scientific. For school science day, I presented a pigeon in a box with a small window in which colored shapes could be made to appear. According to what it saw, the pigeon would turn around clockwise or counter-clockwise, or peck.
We returned via Europe and New York, where I spent several days at the World’s Fair.
Suddenly a world full of discovery was replaced by Indiana suburbia, where we had a “split-level” house with a white-bread yard in a “sub-division” created from corn fields, with a small woods off to one side. One neighboring house with a flat roof (we joked it was a helicopter pad) was supposedly designed by Frank Lloyd Wright or someone almost as famous, but it was as dull as all the others. One home-owner sprayed green paint on his grass late at night, when he thought no-one would notice.
Immediately I started school, 7th grade. In gym class, we were lined up in “squads” – a skinny but tall guy who stood in front of me told a short squat guy behind me that he and I were going to have a fight after school, and asked him to be his “second” before telling me to find one too. I simply didn’t show, and never saw the taller guy again, as it was deemed necessary to put us on half days, with high-schoolers whose building hadn’t been completed on schedule using our junior high the other half day. No time for gym. Well OK. The shorter guy I somehow recognized over 20 years later – he was a neighbor in the Arizona desert a couple miles from the Rez line. We managed to be friends for over 25 years.
One thing that helped my ability for that friendship was the kid a year older than me who lived in the house behind ours. He had a dog. I got one too, same size, medium. Same short hair, his black with some white, mine all light brown. We had little enough else in common, but few alternative friendships as the other early adolescents in our neighborhood attended ‘parochial’ schools.
My dog got named “Vicky” and she’d bounce and bound with glee as the school bus arrived to drop me off. I’d get my bike and we’d roam the flatland, nowhere particular to go. When rich enough, I’d buy a box of dried apricots for a dollar, to eat while I read, which is mostly what I did. Vicky, seeing my enjoyment, developed a longing to participate in the fun and became a huge apricot fan too.
But the wind-up bird or clockwork orange of Skinnerian behaviorism proved anathema to local psychology. We are not stimulus-response organisms of no soul! It was seen as a perversion of reality to even suggest such a thing. Not that my father meant to, not at all. He was a devout Quaker, or Friend, as we called ourselves. He just wanted a scientific approach to teaching, learning and behavior modification for occasionally essential readjustment programs for the maladjusted. That any of his thoughts might be “subversive” he simply compartmentalized. That Mark Twain, Will Rogers, Woody Guthrie, Paul Robeson, Humphrey Bogart or Adlai Stevenson could be in any way subversive was meaningless to him, son of a WWI hero who’d become head of the VA for Mexico despite a court-martial for disobeying a direct order from his superior officer to stand fast (a charge also leveled against me, later, over something much more trivial than the “less than half-a-loaf” my grandmother called her husband’s military trial). As the last officer left alive on his side of a battlefield, he’d called a retreat, and saved many men’s lives. Of course, people have been shot for far less…
My father’d gotten only a one-year contract. It wasn’t renewed. With no decent orchestra to play her harp in, and not much in the way of students, my mother wasn’t at all pleased. In 15 years of marriage they’d already moved 8 times. It was no way to live. Her eldest had been packed off to a Quaker boarding school twice already, and packing’s no-one’s favorite hobby. There were unresolved undercurrents of hostility throughout the house that had never become a real home.
Summer heat had come early, the future was uncertain, and despite the Quaker “peace testimony” most local Quakers had proven war-hawks when it came to the area of Asia we’d recently left (a place we’d left with absolutely no feelings of hostility, quite opposite to what we encountered at a Quaker church, where I witnessed folk getting up on their knees backward on pews to confess aloud their sins to all the congregation. This wasn’t what we were used to!). Also, money was tight. Feelings were fraught, frayed, taut and strained.
In the kitchen, a piece of chicken fell to the floor. Vicky grabbed it. My father freaked, called for, demanded rather, a just-purchased, still-refrigerated steak. Vicky, loath to relinquish her bounty, had already growled. My mother scowled. Steak was expensive!
“Chicken bones will stick in the dog’s throat!” my father yelled. He got the steak. Vicky didn’t care. Mom glowered. Dad didn’t know what to do, how to effect a trade. I’d little idea how to help.
My memory becomes indistinct at this point. All could have just gone on as normal, but the dog had growled at my Dad. Next day she was off to the vet, and never came back. “Distemper” Dad said. I was made to burn all her things, including my old, cherished comfort blanket she’d been sleeping on, a blanket I’d often longed for on many an air-conditioned Bangkok hot-season night with only a sheet.
Then I was packed off to Mexico, and got kind of lost on the way, after the train broke down and a bus ride to El Paso was offered. At El Paso bus station I had no idea what to do, nor even money to take me anywhere. Called home but they were all out to a movie. Called directory assistance in Mexico but didn’t know enough to reach anyone. The phone operator came to get me and I slept at her house. Later got a dollar for Ma Bell making an ad with the story. On the train in the Mexican desert, between episodes of Archie, Jughead, Betty and Mr. Lodge a girl my age was kind enough to share with me, I stood at the back of the train. The conductor had placed eight silver pesos on the back railing, was teasing a couple of young kids with them. Somehow the two boys were suddenly off the train, running behind, trying to catch up. They never did. Those pesos were an ounce of almost pure silver, larger than a silver dollar but traded at eight cents. USA coins were already made of lesser metals.
Our world doesn’t make sense. I’ve several dogs now and they eat chicken bones most days.
Instead of learning about animal behavior from Konrad Lorenz (whose “King Solomon’s Ring” I quite enjoyed) or Pavlov, we’d have done better to ask farmers and Bushmen. Instead of trying to boss the world about, we could have tried to lead by presenting a better example. Instead of trying to teach or trade, we should have tried to learn and do. It might have paid, in better than silver coin. But most likely not in ways that would have provided steak dinners in split-level house sub-divisions.
Even my desert-rat from Indiana Bible-thumping without knowing what’s in it ex-friend knows that. Though he finds no use for said information. The next year was 1967 and Dorothy really wasn’t in Kansas anymore. The genie was out of the bottle, Pandora’s box was open and someone left the cake out in the rain. Oh no.

Friday, March 30, 2018

Has agriculture promoted addictive behavior?

It is generally accepted that the history of agriculture began more than 10,000 years ago.
Major forms of agriculture arose independently in at least seven different areas of the world. Each depended on a different mix of plants and non-human animals. Moreover, there are different mixes of horticulture, or small-scale gardening, with other activities, like gathering and hunting.
According to long-standing archaeological theory, there are eight plants considered the domesticated “founder crops” in the story of the origins of agriculture on our planet. All eight arose in the Fertile Crescent region (what is today southern Syria, Jordan, Israel, Palestine, Turkey and the Zagros foothills in Iran) during the Pre-Pottery Neolithic period some 11,000-10,000 years ago. The eight include three cereals (einkorn wheat, emmer wheat, and barley); four legumes (lentil, pea, chickpea and bitter vetch); and one oil and fiber crop (flax or linseed). Figs might be a ninth.
The origins of agriculture weren’t just in the western and northern Fertile Crescent (along the Euphrates), but also in the foothills of the Zagros Mountains of Iran (in the eastern Fertile Crescent). At the site of Chogha Golan there, charred plant remains (including wild barley, goat-grass and lentil) dating from 11,700 to 9,800 years ago were excavated. By 9,800 years ago, domesticated emmer wheat had appeared. Evidence has also been found in North Africa, Israel and Syria, of pea, chickpea, bitter vetch, and flax from 10,000+ BCE.
By 6000 BCE agriculture was developed independently in the Far East, with rice as the primary crop.
The Kebaran culture of the eastern Mediterranean area (c. 16,000 to 10,500 BCE), named after its type site, Kebara Cave south of Haifa, is associated with the use of the bow and arrow and the domestication of the dog. The Kebaran is characterized by the earliest collecting of wild cereals, known due to the uncovering of grain-grinding tools. The Natufian culture which followed existed from around 12,500 to 9,500 BCE, with a sedentary or semi-sedentary population (they were the original builders of Jericho, the world’s first city) even before the introduction of agriculture. Some evidence suggests deliberate cultivation of cereals, specifically rye, by the Natufian culture at Tell Abu Hureyra, the site of the earliest evidence of agriculture in the world. Generally, though, Natufians exploited wild cereals. Animals hunted included gazelles.
Before the Natufians, roaming hunter-gatherers known as the Kebaran culture trudged the Mediterranean coast as their ancestors had for thousands upon thousands of years. What the Natufians appear to have done, and were possibly the first to have done, is to say “Enough! I’m stopping here thank you,” and build villages.

These villages are very small by modern standards, no more than forty meters across and with populations of less than a couple of hundred people. There was almost nothing like them before. The houses are crude, more like shacks, but they do show a surprising degree of care in their organization and maintenance. Also, whilst there’s no pottery, there are stone bowls and grinding stones.
Perhaps most importantly, these seem to have been all-year-round settlements. The teeth of hunted animals such as gazelle show that both summer and winter kills were being brought back to the villages. Also, there’s plenty of evidence for house mice and rats in numbers appropriate to a village occupied all year.

Connections with the Neolithic:
What intrigues archaeologists most about the Natufian is that, just under three thousand years later, the Neolithic, with the first evidence in the world of agriculture, started in pretty much the same place. It’s difficult not to see the two events as connected, with one leading to the other. The question that archaeologists have asked since the discovery of these earliest villages is “Why then? Why there?” A number of factors have been suggested.
The start of the Natufian culture, 12,500 BCE, also marks the end of the last glaciation. The world’s temperatures rapidly increased at this time by several degrees. This seems to have increased rainfall in the Levant, extending woodland further inland, although at the same time a rise in sea level might have countered this effect a bit. The improvement in climate did, however, reverse around 11,000 BCE, when the “Younger Dryas” event caused a return to cold weather for the next thousand or so years. It was only after the weather warmed up again that agriculture started.
Agriculture changed society in many ways: primarily through division of labor, but also through limiting mobility and the variety it could bring. Exchange became more important, more considered, and unavoidable. Storage hadn’t been much of an issue before; now it was, as was protection of what was stored. There’s been discussion of resultant social and sexual inequality, despotism, militancy, increased disease incidence and declining health.

Proposed Centers of Origin of Some Crops:
1. Near East (Fertile Crescent): wheat and barley, flax, lentils, chickpea, figs, dates, grapes, olives, lettuce, onions, cabbage, carrots, cucumbers, and melons; fruits and nuts.
2. Africa: pearl millet, Guinea millet, African rice, sorghum, cowpea, Bambara groundnut, yam, oil palm, watermelon, okra.
3. China: Japanese millet, rice, buckwheat, and soybean (about 7000 BCE).
4. South-east Asia: wet- and dryland rice, pigeon pea, mung bean, citrus fruits, taro, yams, banana, breadfruit, coconut, sugarcane.
5. Mesoamerica and North America: maize, squash, common bean, lima bean, peppers, amaranth, sweet potato, sunflower.
6. South America: lowlands: cassava; mid-altitudes and uplands (Peru): potato, peanut, cotton, maize.


Adapted from “The origins of agriculture: a biological perspective and a new hypothesis” by Greg Wadley and Angus Martin, published in Australian Biologist 6: 96-105, June 1993

There’s never been agreement on the nature and significance of the rise of civilization, yet the questions posed by the problem are fundamental. How did civilization come about? What animus impelled man to forego the independence, intimacies, and invariability of tribal existence for the much larger and more impersonal political complexity we call the state? What forces fused to initiate the mutation that slowly transformed nomadic societies into populous cities with ethnic mixtures, stratified societies, diversified economies and unique cultural forms? Was the advent of civilization the inevitable result of social evolution and natural laws of progress or was man the designer of his own destiny? Have technological innovations been the motivating force or was it some intangible factor such as religion or intellectual advancement?

To a very good approximation, every civilization that came into being had cereal agriculture as its subsistence base, and wherever cereals were cultivated, civilization appeared. Some hypotheses have linked the two. For example, Wittfogel’s (1957) ‘hydraulic theory’ postulated that irrigation was needed for agriculture, and the state was in turn needed to organize irrigation. But not all civilizations used irrigation, and other possible factors (e.g. river valley placement, warfare, trade, technology, religion, and ecological and population pressure) have not led to a universally accepted model.
Prompted by a possible link between diet and mental illness, several researchers in the late 1970s began investigating the occurrence of drug-like substances in some common foodstuffs. Dohan (1966, 1984) and Dohan et al. (1973, 1983) found that symptoms of schizophrenia were relieved somewhat when patients were fed a diet free of cereals and milk. Dohan also found that people with coeliac disease - those who are unable to eat wheat gluten because of higher than normal permeability of the gut - were statistically likely to suffer also from schizophrenia. Research in some Pacific communities showed that schizophrenia became prevalent in these populations only after they became ‘partially westernized and consumed wheat, barley beer, and rice’ (Dohan 1984).
Groups led by Zioudrou (1979) and Brantl (1979) found opioid activity in wheat, maize and barley (exorphins), and bovine and human milk (casomorphin), as well as stimulatory activity in these proteins, and in oats, rye and soy. Cereal exorphin is much stronger than bovine casomorphin, which in turn is stronger than human casomorphin. Mycroft et al. (1982, 1987) found an analogue of MIF-1, a naturally occurring dopaminergic peptide, in wheat and milk. It occurs in no other exogenous protein. (In subsequent sections we use the term exorphin to cover exorphins, casomorphin, and the MIF-1 analogue. Though opioid and dopaminergic substances work in different ways, they are both ‘rewarding’, and thus more or less equivalent for our purposes.)
Since then, researchers have measured the potency of exorphins, showing them to be comparable to morphine and enkephalin (Heubner et al. 1984), determined their amino acid sequences (Fukudome & Yoshikawa 1992), and shown that they are absorbed from the intestine (Svedburg et al. 1985) and can produce effects such as analgesia and reduction of anxiety which are usually associated with poppy-derived opioids (Greksch et al. 1981, Panksepp et al. 1984). Mycroft et al. estimated that 150 mg of the MIF-1 analogue could be produced by normal daily intake of cereals and milk, noting that such quantities are orally active, and half this amount ‘has induced mood alterations in clinically depressed subjects’ (Mycroft et al. 1982:895). (For detailed reviews see Gardner 1985 and Paroli 1988.)

Most common drugs of addiction are either opioid (e.g. heroin and morphine) or dopaminergic (e.g. cocaine and amphetamine), and work by activating reward centers in the brain. Hence we may ask, do these findings mean that cereals and milk are chemically rewarding? Are humans somehow 'addicted' to these foods?

Problems in interpreting these findings:
Discussion of the possible behavioral effects of exorphins, in normal dietary amounts, has been cautious. Interpretations of their significance have been of two types:
(1) where a pathological effect is proposed (usually by cereal researchers, and related to Dohan’s findings, though see also Ramabadran & Bansinath 1988), and
(2) where a natural function is proposed (by milk researchers, who suggest that casomorphin may help in mother-infant bonding or otherwise regulate infant development).

We believe that there can be no natural function for ingestion of exorphins by adult humans. It may be that a desire to find a natural function has impeded interpretation (as well as causing attention to focus on milk, where a natural function is more plausible). It’s unlikely that humans are adapted to a large intake of cereal exorphin, because the modern dominance of cereals in the diet is simply too new. If exorphin is found in cow’s milk, then it may have a natural function for cows; similarly, exorphins in human milk may have a function for infants. But either way, adult humans don’t naturally drink milk of any kind, so any natural function can’t apply to them.
Our sympathies therefore lie with the pathological interpretation of exorphins, whereby substances found in cereals and milk are seen as modern dietary abnormalities which may cause schizophrenia, coeliac disease or whatever. But these are serious diseases found in a minority. Can exorphins be having an effect on humankind at large?
Other evidence for 'drug-like' effects of these foods:
Research into food allergy has shown that normal quantities of some foods can have pharmacological, including behavioral, effects. Many people develop intolerances to particular foods. Various foods are implicated, and a variety of symptoms is produced. (The term ‘intolerance’ rather than ‘allergy’ is often used, as in many cases the immune system may not be involved; Egger 1988:159.) Some intolerance symptoms, such as anxiety, depression, epilepsy, hyperactivity, and schizophrenic episodes, involve brain function (Egger 1988, Scadding & Brostoff 1988).
Radcliffe (1982, quoted in 1987:808) listed the foods at fault, in descending order of frequency, in a trial involving 50 people: wheat (more than 70% of subjects reacted in some way to it), milk (60%), egg (35%), corn, cheese, potato, coffee, rice, yeast, chocolate, tea, citrus, oats, pork, plaice, cane, and beef (10%). This is virtually a list of foods that have become common in the diet following the adoption of agriculture, in order of prevalence. The symptoms most commonly alleviated by treatment were mood change (>50%) followed by headache, musculoskeletal and respiratory ailments.
One of the most striking phenomena in these studies is that patients often exhibit cravings, addiction and withdrawal symptoms with regard to these foods (Egger 1988:170, citing Randolph 1978; see also Radcliffe 1987:808-10, 814, Kroker 1987:856, 864, Sprague & Milam 1987:949, 953, Wraith 1987:489, 491). Brostoff and Gamlin (1989:103) estimated that 50% of intolerance patients crave the foods that cause them problems, and experience withdrawal symptoms when excluding those foods from their diet. Withdrawal symptoms are similar to those associated with drug addictions (Radcliffe 1987:808). The possibility that exorphins are involved has been noted (Bell 1987:715), and Brostoff and Gamlin conclude (1989:230):
“... the results so far suggest that they might influence our mood. There is certainly no question of anyone getting ‘high’ on a glass of milk or a slice of bread - the amounts involved are too small for that - but these foods might induce a sense of comfort and wellbeing, as food-intolerant patients often say they do. There are also other hormone-like peptides in partial digests of food, which might have other effects on the body.”
There’s no possibility that craving these foods has anything to do with the popular notion of the body telling the brain what it needs for nutritional purposes. These foods weren’t significant in the human diet before agriculture; large quantities of them cannot be necessary for nutrition. In fact, the standard way to treat food intolerance is to remove the offending items from the patient’s diet.

A suggested interpretation of exorphin research:
But what are the effects of these foods on normal people? Though exorphins cannot have a naturally selected physiological function in humans, this does not mean that they have no effect. Food intolerance research suggests that cereals and milk, in normal dietary quantities, are capable of affecting behavior in many people. And if severe behavioral effects in schizophrenics and coeliacs can be caused by higher than normal absorption of peptides, then more subtle effects, which may not even be regarded as abnormal, could be produced in people generally.

The evidence presented so far suggests the following interpretation:
The ingestion of cereals and milk, in normal modern dietary amounts by normal humans, activates reward centers in the brain. Foods that were common in the diet before agriculture (fruits and so on) do not have this pharmacological property. The effects of exorphins are qualitatively the same as those produced by other opioid and/or dopaminergic drugs, that is, reward, motivation, reduction of anxiety, a sense of wellbeing, and perhaps even addiction. Though the effects of a typical meal are quantitatively less than those of doses of those drugs, most modern humans experience them several times a day, every day of their adult lives.

Hypothesis: exorphins and the origin of agriculture and civilization:
When this scenario of human dietary practices is viewed in light of the problem of the origin of agriculture, it suggests an hypothesis that combines the results of these lines of enquiry. Exorphin researchers, perhaps lacking a long-term historical perspective, have generally not investigated the possibility that these foods really are drug-like, and have instead searched without success for exorphin’s natural function. The adoption of cereal agriculture and the subsequent rise of civilization haven’t been satisfactorily explained, because the behavioral changes underlying them have no obvious adaptive basis.
These unsolved and until-now unrelated problems may in fact solve each other. The answer, we suggest, is this: cereals and dairy foods are not natural human foods, but rather are preferred because they contain exorphins. This chemical reward was the incentive for the adoption of cereal agriculture in the Neolithic. Regular self-administration of these substances facilitated the behavioral changes that led to the subsequent appearance of civilization.

This is the sequence of events that we envisage:
Climatic change at the end of the last glacial period led to an increase in the size and concentration of patches of wild cereals in certain areas (Wright 1977). The large quantities of cereals newly available provided an incentive to try to make a meal of them. People who succeeded in eating sizeable amounts of cereal seeds discovered the rewarding properties of the exorphins contained in them. Processing methods such as grinding and cooking were developed to make cereals more edible. The more palatable they could be made, the more they were consumed, and the more important the exorphin reward became for more people.
At first, patches of wild cereals were protected and harvested. Later, land was cleared and seeds were planted and tended, to increase quantity and reliability of supply. Exorphins attracted people to settle around cereal patches, abandoning their nomadic lifestyle, and allowed them to display tolerance instead of aggression as population densities rose in these new conditions.
Though it was, we suggest, the presence of exorphins that caused cereals (and not an alternative already prevalent in the diet) to be the major early cultigens, this does not mean that cereals are ‘just drugs’. They have been staples for thousands of years, and clearly have nutritional value. However, treating cereals as ‘just food’ leads to difficulties in explaining why anyone bothered to cultivate them. The fact that overall health declined when they were incorporated into the diet suggests that their rapid, almost total replacement of other foods was more due to chemical reward than to nutrition.
It’s noteworthy that the extent to which early groups became civilized correlates with the type of agriculture they practiced. That is, major civilizations (in south-west Asia, Europe, India, and east and parts of South-East Asia; central and parts of north and south America; Egypt, Ethiopia and parts of tropical and west Africa) stemmed from groups which practiced cereal, particularly wheat, agriculture (Bender 1975:12, Adams 1987:201, Thatcher 1987:212). (The rarer nomadic civilizations were based on dairy farming.) Groups which practiced vege-culture (of fruits, tubers etc.), or no agriculture (in tropical and south Africa, north and central Asia, Australia, New Guinea and the Pacific, and much of north and south America) did not become civilized to the same extent.
Thus major civilizations have in common that their populations were frequent ingesters of exorphins. We propose that large, hierarchical states were a natural consequence among such populations. Civilization arose because reliable, on-demand availability of dietary opioids to individuals changed their behavior, reducing aggression, and allowed them to become tolerant of sedentary life in crowded groups, to perform regular work, and to be more easily subjugated by rulers. Two socioeconomic classes emerged where before there had been only one (Johnson & Earle 1987:270), thus establishing a pattern which has been prevalent since that time.

The natural diet and genetic change:
Some nutritionists deny the notion of a pre-agricultural natural human diet on the basis that humans are omnivorous, or have adapted to agricultural foods (e.g. Garn & Leonard 1989; for the contrary view see for example Eaton & Konner 1985). An omnivore, however, is simply an animal that eats both meat and plants: it can still be quite specialized in its preferences (chimpanzees are an appropriate example). A degree of omnivory in early humans might have pre-adapted them to some of the nutrients contained in cereals, but not to exorphins, which are unique to cereals.
The differential rates of lactase deficiency, coeliac disease and favism (the inability to metabolize fava beans) among modern racial groups are usually explained as the result of varying genetic adaptation to post-agricultural diets (Simopoulos 1990:27-9), and this could be thought of as implying some adaptation to exorphins as well. We argue that little or no such adaptation has occurred, for two reasons: first, allergy research indicates that these foods still cause abnormal reactions in many people, and that susceptibility is variable within as well as between populations, indicating that differential adaptation is not the only factor involved. Second, the function of the adaptations mentioned is to enable humans to digest those foods, and if they are adaptations, they arose because they conferred a survival advantage. But would susceptibility to the rewarding effects of exorphins lead to lower, or higher, reproductive success? One would expect in general that an animal with a supply of drugs would behave less adaptively and so lower its chances of survival. But our model shows how the widespread exorphin ingestion in humans has led to increased population. And once civilization was the norm, non-susceptibility to exorphins would have meant not fitting in with society. Thus, though there may be adaptation to the nutritional content of cereals, there will be little or none to exorphins. In any case, while contemporary humans may enjoy the benefits of some adaptation to agricultural diets, those who actually made the change ten thousand years ago did not.

Other 'non-nutritional' origins of agriculture models:
We are not the first to suggest a non-nutritional motive for early agriculture. Hayden (1990) argued that early cultigens and trade items had more prestige value than utility, and suggested that agriculture began because the powerful used its products for competitive feasting and accrual of wealth. Braidwood et al. (1953) and later Katz and Voigt (1986) suggested that the incentive for cereal cultivation was the production of alcoholic beer:
“Under what conditions would the consumption of a wild plant resource be sufficiently important to lead to a change in behavior (experiments with cultivation) in order to ensure an adequate supply of this resource? If wild cereals were in fact a minor part of the diet, any argument based on caloric need is weakened. It is our contention that the desire for alcohol would constitute a perceived psychological and social need that might easily prompt changes in subsistence behavior” (Katz & Voigt 1986:33).
This view is clearly compatible with ours. However there may be problems with an alcohol hypothesis: beer may have appeared after bread and other cereal products, and been consumed less widely or less frequently (Braidwood et al. 1953). Unlike alcohol, exorphins are present in all these products. This makes the case for chemical reward as the motive for agriculture much stronger. Opium poppies, too, were an early cultigen (Zohari 1986). Exorphin, alcohol, and opium are primarily rewarding (as opposed to the typically hallucinogenic drugs used by some hunter-gatherers) and it is the artificial reward which is necessary, we claim, for civilization. Perhaps all three were instrumental in causing civilized behavior to emerge.
Cereals have important qualities that differentiate them from most other drugs. They are a food source as well as a drug, and can be stored and transported easily. They are ingested in frequent small doses (not occasional large ones), and do not impede work performance in most people. A desire for the drug, even cravings or withdrawal, can be confused with hunger. These features make cereals the ideal facilitator of civilization (and may also have contributed to the long delay in recognizing their pharmacological properties).
Our hypothesis is not a refutation of existing accounts of the origins of agriculture, but rather fits alongside them, explaining why cereal agriculture was adopted despite its apparent disadvantages and how it led to civilization.
Gaps in our knowledge of exorphins limit the generality and strength of our claims. We do not know whether rice, millet and sorghum, or the grass species harvested by African and Australian hunter-gatherers, contain exorphins. We need to be sure that pre-agricultural staples do not contain exorphins in amounts similar to those in cereals. We do not know whether domestication has affected exorphin content or potency. A test of our hypothesis by correlation of diet and degree of civilization in different populations will require quantitative knowledge of the behavioral effects of all these foods.
We do not comment on the origin of non-cereal agriculture, nor why some groups used a combination of foraging and farming, reverted from farming to foraging, or did not farm at all. Cereal agriculture and civilization have, during the past ten thousand years, become virtually universal. The question, then, is not why they happened here and not there, but why they took longer to become established in some places than in others. At all times and places, chemical reward and the influence of civilizations already using cereals weighed in favor of adopting this lifestyle, the disadvantages of agriculture weighed against it, and factors such as climate, geography, soil quality, and availability of cultigens influenced the outcome. There is a recent trend to multi-causal models of the origins of agriculture (e.g. Redding 1988, Henry 1989), and exorphins can be thought of as simply another factor in the list. Analysis of the relative importance of all the factors involved, at all times and places, is beyond the scope of this paper.
“An animal is a survival machine for the genes that built it. We too are animals, and we too are survival machines for our genes. That is the theory. In practice it makes a lot of sense when we look at wild animals.... It is very different when we look at ourselves. We appear to be a serious exception to the Darwinian law.... It obviously just isn't true that most of us spend our time working energetically for the preservation of our genes” (Dawkins 1989:138).
Many ethologists have acknowledged difficulties in explaining civilized human behavior on evolutionary grounds, in some cases suggesting that modern humans do not always behave adaptively. Yet since agriculture began, the human population has risen by a factor of 1000: Irons (1990) notes that ‘population growth is not the expected effect of maladaptive behavior’.
We have reviewed evidence from several areas of research which shows that cereals and dairy foods have drug-like properties, and shown how these properties may have been the incentive for the initial adoption of agriculture. We suggested further that constant exorphin intake facilitated the behavioral changes and subsequent population growth of civilization, by increasing people's tolerance of (a) living in crowded sedentary conditions, (b) devoting effort to the benefit of non-kin, and (c) playing a subservient role in a vast hierarchical social structure.
Cereals are still staples, and methods of artificial reward have diversified since that time, including today a wide range of pharmacological and non-pharmacological cultural artifacts whose function, ethologically speaking, is to provide reward without adaptive benefit. It seems reasonable then, to suggest that civilization not only arose out of self-administration of artificial reward, but is maintained in this way among contemporary humans. Hence a step towards resolution of the problem of explaining civilized human behavior may be to incorporate into ethological models this widespread distortion of behavior by artificial reward.


Wanting to further explore this idea, I gathered from internet sources:
One model for the evolution of alcohol consumption suggests that ethanol only entered the human diet after people began to store extra food, potentially after the advent of agriculture, and that humans subsequently developed ways to intentionally direct the fermentation of food about 7000BCE. As almost any cereal containing certain sugars can undergo spontaneous fermentation due to wild yeasts in the air, beer-like beverages developed independently throughout the world when people domesticated cereals. Chemical tests of ancient pottery jars reveal beer produced about 5000BCE in today’s Iran. In Mesopotamia, the oldest evidence of beer we have so far is a 6,000-year-old Sumerian tablet depicting people drinking through reed straws from a communal bowl. In China, residue on pottery dating from between 5400 and 4900 years ago shows beer was brewed using barley and other grains.
Discovery of late Stone Age jugs suggests that intentionally fermented beverages existed at least as early as the Neolithic period (c. 10000 BCE). Chemical analysis of jars from the Neolithic village of Jiahu in the Henan province of northern China revealed traces of alcohol that had been absorbed and preserved. According to a study published in December 2004 in the Proceedings of the National Academy of Sciences, chemical analysis of the residue confirmed that a fermented drink made of grapes, hawthorn berries, honey, and rice was being produced in 7000–6650BCE; this is approximately the time when barley beer and grape wine were beginning to be made in the Middle East. The earliest firm evidence of wine production dates back to 6000BCE in Georgia. Medicinal use of alcohol was mentioned in Sumerian and Egyptian texts dating from about 2100BCE. Evidence of alcoholic beverages has also been found dating from 3150BCE in ancient Egypt, 3000BCE in Babylon, 2000BCE in pre-Hispanic Mexico, and 1500BCE in Sudan.
There may have been a single genetic mutation 10 million years ago that endowed our ancestors with an enhanced ability to break down ethanol. Scientists note that the timing of this mutation coincides with a shift to a terrestrial lifestyle; the ability to consume ethanol may have helped those ancestors dine on rotting, fermenting fruit that fell to the forest floor when other food was scarce.
The Mediterranean region contains the earliest archeological evidence of human opium use; the oldest known seeds date back to before 5000BCE, in the Neolithic age, with purposes such as food, anesthesia, and ritual. The earliest recorded use of narcotics dates to about 4000BCE. Evidence from ancient Greece indicates that opium was consumed in several ways: inhalation of vapors, suppositories, medical poultices, and in combination with hemlock for suicide. Indian scholars maintain that ancient verses, and the history recorded in them, were transmitted orally for thousands of years before even 4000BCE.
Tea drinking likely began in Yunnan province during the Shang Dynasty (1500–1046 BCE). Legend has the Yellow Emperor, inventor of agriculture and Chinese medicine, drinking a bowl of just-boiled water in about 2737 BCE (he had decreed that his subjects must boil water before drinking it) when some tea leaves fell in and he enjoyed the taste. In 1978, archaeologists found tea relics in the Tianluo mountains and estimated them at 7,000 years old. Another promising find in the same mountains was old roots of the Camellia sinensis plant alongside broken pottery; these roots were determined to be about 6,000 years old (from about 4000BCE). It wasn’t until the Tang dynasty (618–907) that consumption became widespread.
This goes a bit beyond my ken, and I expect that of most readers too:
Addiction is a disorder of the brain’s reward system which arises through transcriptional and epigenetic mechanisms and occurs over time from chronically high levels of exposure to an addictive stimulus (e.g., eating food, the use of cocaine, engagement in sexual intercourse, participation in high-thrill cultural activities such as gambling, etc.). ΔFosB, a gene transcription factor, is a critical component and common factor in the development of virtually all forms of behavioral and drug addictions. Research into ΔFosB’s role in addiction has demonstrated that addiction arises, and the associated compulsive behavior intensifies or attenuates, along with the over-expression of ΔFosB in the D1-type medium spiny neurons of the nucleus accumbens. Due to the causal relationship between ΔFosB expression and addictions, it is used pre-clinically as an addiction biomarker. ΔFosB expression in these neurons directly and positively regulates drug self-administration and reward sensitization through positive reinforcement, while decreasing sensitivity to aversion. I expect that it mostly means that correlation has been found between brain chemistry and the compulsions we call addiction.

A common belief is that psychotropic plant chemicals evolved recurrently throughout evolutionary history, perhaps beginning some 200 million years ago; archaeological records indicate the presence of psychotropic plants and drug use going back to early hominid species and throughout ancient civilizations. Roughly 13,000 years ago the inhabitants of Timor commonly used betel nut (Areca catechu), as did those in Thailand around 8,700BCE. At the beginning of European colonialism, and perhaps for 40,000 years before that, Australian aborigines used nicotine from two different indigenous sources: the pituri plant (Duboisia hopwoodii) and Nicotiana gossei. North and South Americans also used nicotine from their indigenous plants N. tabacum and N. rustica. Ethiopians and northern Africans were documented as having used an ephedrine analog, khat (Catha edulis), before European colonization. Cocaine (Erythroxylum coca) was taken by Ecuadorians about 3000BCE and by the indigenous people of the western Andes almost 7,000 years ago (5000BCE). The substances were popularly administered through the buccal cavity, within the cheek: nicotine, cocaine, and ephedrine sources were first mixed with an alkali substance, most often wood or lime ash, creating a free base to facilitate diffusion of the drug into the bloodstream. Alkali paraphernalia have been found throughout these regions and documented within the archaeological record. Although the buccal method is believed to have been the most common method of drug administration, inhabitants of the Americas may also have administered substances nasally, rectally, and by smoking.
Sex addiction as a term first emerged in the mid-1970s, when various members of Alcoholics Anonymous sought to apply 12-step principles toward recovery from serial infidelity and other unmanageable compulsive sexual behaviors involving the same powerlessness and unmanageability they had experienced with alcoholism. Certainly compulsive sexual behavior, as with “Jack the Ripper” (!), existed long before 1970.


Sunday, December 24, 2017

Vasco da Gama trades Christianity for spice

Vasco da Gama (1460–1524) was a peevish, paranoid trader who refused to go ashore for the first Portuguese contact at an Indian port, and thus missed hearing a pair of multi-lingual Tunisian merchants ask, in both Castilian and Genoese, “The Devil take you! What brought you here?” His chosen representative, a convict, replied: “We came to seek Christians and spices.”
The vicious commander of the first ships to sail directly from Europe to India, he tried to do business in India from a small 50 ton caravel and even smaller supply ship, with a crew of about 160 (including gunners, musicians and three Arabic interpreters), using trade goods common to West Africa but of virtually no interest in Asia—coarse cloth, bells, beads . . . “Business” is a fairly euphemistic term for what he did, though; he pretended to offer protection. His guns, for the time, were state-of-the-art and more powerful than anything those encountered on his voyages had experienced before. Through force, he opened the way to Portugal’s success in appropriating, or more politely, colonizing. He, or anyway his crew, introduced syphilis to Asia. Inflicting casual violence almost everywhere he went, he abused and slaughtered people he meant to glean advantage through, setting a tone for distrust that has remained.
The Portuguese saw excursions to Asia as more than business: they were a continuation of six centuries of war (with an average of a battle or two a day) between Christians and Islamic Moors, mainly in the Iberian Peninsula. The Republic of Venice, by trading with Muslims who held a trade monopoly with India and the Far East, had controlled most trade between Europe and Asia. Da Gama out-flanked Venice by going around Africa (this was 5 years after Columbus “discovered” the Americas, and the route swung far across the Atlantic, almost to Brazil, then yet unknown).
Muslims invaded Portugal in 711 and again in 1191, gaining control of almost the whole Iberian Peninsula; total re-conquest wasn’t achieved until the 1270s. In 1340 a Portuguese army joined Alfonso XI of Castile in winning a victory over Muslims in Andalusia. The Portuguese attacked Morocco in 1458, 1463, and 1471. Turks took Constantinople in 1453, while other Muslims were being pushed from Iberia. In 1578, when at the height of their power in Asia (power gained through da Gama’s efforts), Portuguese knights fought a spirited, essentially chauvinistic campaign at Alcazar, Morocco, and were soundly defeated; at least one member of nearly every noble Portuguese family died there, for reasons almost purely cultural and religious.
Da Gama may have learned astronomy under the renowned Abraham Zacuto; Genoese sailors, jealous of Venice’s monopoly on Arabic (Persian) trade, were willing to share secrets with the Portuguese. Da Gama’s decisive expedition sailed from Lisbon on July 8, 1497, with three interpreters: two Arabic speakers and one who spoke several Bantu dialects, as Arab-controlled territory on the East African coast was an integral part of the Indian Ocean trade network. Upon reaching Mozambique on March 2, 1498, da Gama and crew impersonated Muslims to gain audience with the Sultan. They offered unsuitable trade goods and gifts, and were told that Prester John, the long-sought Christian ruler from whom a Pope had once received a missive, lived in the interior (but controlled many coastal cities). The Sultan supplied the unconvinced da Gama with two pilots, but hostile crowds threatened, and on discovering the Portuguese to be Christian, one pilot deserted. Upon departure, the Portuguese fired cannons into the city.
Moving up Africa’s east coast, to Mombasa and Malindi (in present-day Kenya), da Gama found much more sophisticated economic life than in West Africa. Coastal towns had merchants—Arabs, Indians from Gujarat and Malabar and Persians—who imported silk and cotton textiles, spices and Chinese porcelain, while exporting cotton, timber and gold. Looting Arab (or more likely, Persian) merchant ships, which lacked the heavy cannon da Gama carried, he took gold, silver, jewels and spices, but was somehow able to get a competent Gujarati pilot from the ruler of Malindi, and so on to Calicut (in Kerala). Da Gama was welcomed by the Hindu ruler (Zamorin) of Calicut, but failed to achieve a treaty, partly due to hostility from Muslim merchants, but more due to the low quality of his trade goods.
The Portuguese remained in Calicut for three months, discovering much about prices and conditions in the spice market, but failed to establish amicable relations with the local ruler (or sell anything). Muslim merchants made allegations against da Gama and he was arrested; the mayor told him to leave without cargo, and detained seven men as hostages. So, upon setting sail, da Gama seized 20 fishermen. A hostage-trade was arranged, but only superior gunnery saved da Gama from Islamic wrath. He destroyed a Calicut fleet of 29 ships, and so finally got favorable trading concessions from the Zamorin. He reached Lisbon in 1499, after two years in which he’d lost half his crew and ships. Nevertheless Manuel I granted Vasco the title of dom (equivalent to the English “sir”), estates and an annual pension. Made admiral January 1502, da Gama sailed again in February, now with 20 ships. He stopped briefly at Mozambique, then sailed to Kilwa (Tanzania), and threatened the Islamic ruler with destruction if he didn’t submit and swear loyalty. He did, and also agreed to an annual tribute of pearls and gold.
After coasting southern Arabia, da Gama went to Goa, then on to Cannanore, north of Calicut, to lay in wait for Arab and Persian shipping. After several days an Arab ship returning from Mecca with valuable merchandise and hundreds of passengers, including many women and children, arrived; da Gama seized the cargo, locked passengers aboard and then set all ablaze. Headed to Cochin, he stopped opposite Calicut and demanded that the ruler expel the whole Muslim merchant community (4000 households). The Samudri, the local Hindu ruler, refused, so da Gama bombarded the city. At Cochin, he bought spices with silver, copper and textiles he’d taken from the ship he burned. After setting up a permanent factory (warehouse) in Cochin, he left five ships there to protect Portuguese interests.
Da Gama returned to Lisbon again in October 1503, with 13 of his ships and nearly 1700 tons of spices, about the same as annual Venetian imports of the time. Portuguese King Manuel I “the Fortunate” (ruled 1495 to 1521) then assumed the title of “lord of the conquest, navigation, and commerce of India, Ethiopia, Arabia, and Persia”! Defeat of an Islamic navy in 1509 gave Portugal control of sea trade, which became the chief source of Portuguese wealth. From Portuguese dominion the British East India Company was able to grow, and then came a century and a half of Protestant-led world domination by Europeans – a misled supremacy we can only hope we’re on the verge of recovering from, now.
Alexander of Greece had invaded the Punjab in 327 BCE; even before that, Indian teak and cedar were used in Babylon (in the 7th and 6th centuries BCE, as mentioned in Buddhist Jataka texts). Arab merchants brought Indian goods to Egypt and the eastern Mediterranean, and in the 2nd century BCE Greeks from Bactria founded kingdoms in the Punjab and bordering Afghan hills which lasted over a century. European commercial involvement with India then died with the decline of the Roman Empire in the 4th century CE, and trade passed completely into Arab hands. In the 1400s, land routes to India—via Egypt and the Red Sea, across Turkey and Persia, or through Syria and Iraq to the Persian Gulf—were blocked, mainly by Ottoman action. The Egyptian route was subject to increasing exploitation by a line of middlemen ending with the Venetian monopoly; in 1517 that too passed to Ottoman control. The motive for finding a new route was strong.
For Europe in 1498, India was a land of spices and wonderful marvels; for Muslims, Europe was the land of Rūm and Greek Constantinople (Turkish after 1453). For Hindus, Europe was the home of warlike Yavanas (from the Greek word Ionian). The Portuguese were the first to renew direct contact, being among the few nations to possess both the navigational know-how (including navigational techniques learned from disgruntled Genoese, as mentioned earlier) and the necessary motivation (both revenge on Islam, and supplementing the sparse economy of home).
The Portuguese first tried to get a port in northwest India in 1507; by 1534 they had other important trading centers on the western coast (Panjim, Daman, and Diu), and had seized from Muslims the port which became Bombay, now Mumbai, India’s principal Arabian Sea port, its financial and commercial centre, and one of the world’s largest and most densely populated cities. The name Bombay is an Anglicization of “Bom Baia” – Portuguese for “good bay”. In 1661 it came under British control through King Charles II’s marriage to Catherine of Braganza, sister of Portugal’s king; in 1668 it was ceded to the East India Company. In the early 1700s the Portuguese started exporting opium from India to sell in China (at a considerable profit), and by 1773 the British were also involved in that trade – psycho-actives replacing spice as tea gained popularity in Europe and especially England – tea that there was little other way to pay for.
Da Gama had hoped to find Christians separated from the West by Muslims, and to be able to deal a blow at Muslim power from their maritime rear. He found some in the Syrians of Cochin and Travancore, but mostly only further alienated Muslims. His successors established a Portuguese empire in the East – Goa, Timor and Macao were Portuguese until late in the 20th century. For over 100 years, Portugal had strategic command over the Indian Ocean, and controlled the maritime spice trade, much to the detriment of trade by the Ottoman-controlled Middle Eastern Muslim world. The Portuguese relied on naval power and fortified posts backed by settlements; their ships, sturdy enough to survive Atlantic gales and mounted with cannon, easily disrupted Arab and Malay shipping. But Portugal had fewer than a million people and was involved in Africa and South America as well. It was desperately short of manpower. Fortresses had to become settlements and provide a resident population for defense; intermarriage was encouraged and a new mixed population provided stubborn resistance to attacks.
Despite their small numbers, Portuguese mercenaries operated through much of Asia. Vijayanagar, the first South Indian state to encompass the three major linguistic and cultural traditions of India (Indo-Aryan Sanskrit, Dravidian Tamil, and the Sunni Muslim Arabs and Turks, mostly of the Bahmani state), briefly retook Goa from Muslims, just before the Portuguese made it their first territorial possession in Asia, in 1510. Krishna Deva Raya (reigned 1509–29), the greatest Vijayanagar king, garrisoned forts with Portuguese and Muslim mercenary gunners (and foot soldiers from local forest tribes). The successful use of Portuguese gunners in his campaigns left vivid impressions on Muslim rulers of the danger posed by Vijayanagar, resulting in more concerted action against that kingdom. Krishna Deva mostly maintained a mutually advantageous relationship with the increasingly powerful Portuguese, retaining access to trade goods, especially superior horses. In 1546 Portugal made a treaty to expand its settlements but, due to harm done to locals, the treaty was broken in 1558; tribute was exacted in compensation for Portuguese damage to temples.
Portugal’s control of the Indian Ocean lasted through the 16th century. Three marks of Portuguese empire were trade, Roman Catholic Christianity, and anti-Islamism. The Portuguese felt that no faith need be kept with infidels, and were happily cruel to Muslims—well beyond the normal limits in a very rough age. This, missionary fervor and intolerance deprived them of potential Indian sympathy.
In 1580 Spain annexed Portugal; until 1640, Portuguese interests were made secondary to those of Spain. The Spanish failed to quell a Dutch uprising, and after the defeat of the Spanish Armada in 1588, sea-ways opened to the English and Dutch. Portuguese ascendancy crumbled.
The English East India Company gained monopoly rights to trade as of 1600, with an initial capital under a tenth of that of the similar Dutch company. Its object was also to trade in spice, primarily from the East Indies (Indonesia); it used India for a secondary purpose—securing cotton for sale to spice growers. The English venture was troubled by determined Dutch opposition, and by the Portuguese, who enjoyed Mughal recognition (especially at the western Indian port of Surat). A 1612 English victory over the Portuguese, whose control of the sea route to Mecca was resented by the Mughals, and especially by pilgrims, provided the English with the right to trade and establish “factories”—in return for becoming naval auxiliaries for the Mughal Empire. Merchants lived in the “factories”, or in collegiate-type settlements where life was confined, colorful, and often short, but a century of peaceful trading through factories operating under Mughal grants followed. An exception to this arrangement was the long-independent island port of Bombay. Its inhabitants, the Maratha—Hindus often of the Kshatriya warrior class, whose antecedents may have been founders of the 7th to 13th century Srivijaya maritime empire—were locked in vicious combat with the Mughals. The Marathi-speaking region extends from Bombay to Goa, and inland about 100 miles.
In the long run, English trade in bulk goods, instead of the highly priced luxuries preferred by the Dutch, became the more profitable—smaller areas to cover and less need for armed forces reduced overhead. But India would take little other than silver in exchange for goods, and loss of too much bullion wasn’t acceptable in England’s mercantilist political economy. So the English developed a system similar to that of the Dutch, with Madras and Gujarat supplying cotton goods (Gujarat also supplied indigo), and Bengal supplying silk, sugar, and saltpeter (for gunpowder). There was spice trade along the Malabar coast, in competition with the Portuguese and Dutch. But it was opium shipped to China that laid the basis for continued English trade; English tea imports increased from 54,000 pounds in 1706 to over 2,300,000 pounds in 1750, paid for mostly through the sale to China of Indian opium.
Goa, with a coastline of 65 miles, was finally annexed by India in 1962, after Indian troops supported by naval and air forces invaded and occupied Goa, Daman, and Diu. Portuguese India was then incorporated into the Indian Union, and what da Gama had wrought was brought to naught. The increased trade he had facilitated reduced no cultural divides.
