Taoist mythology, Lanna history, mythology, the nature of time and other considered ramblings

My Photo
Location: Chiangrai, Chiangrai, Thailand

Author of many self-published books, including several about Thailand and Chiang Rai, Joel Barlow lived in Bangkok 1964-65, attending 6th grade with the International School of Bangkok's only Thai teacher. He first visited ChiangRai in 1988, and moved there in 1998.

Tuesday, April 13, 2010

The State We’re In

It’d be quite interesting to know what percentage of the over $700 billion US “defense” (wars for special-interest reasons) spending goes to private companies. In today’s “capitalism,” an amazing amount of taxpayers’ earnings goes to private firms via governments prepared to commit any crime in order to maximize profits (for a few). For those who haven’t recognized it yet, the motivation for US military aggression in Afghanistan is natural gas under Uzbekistan and Turkmenistan – gas to be piped south, but to what port no-one yet knows. As Paul Craig Roberts, Assistant Secretary of the Treasury in the Reagan administration, reports, the consultant who arranged with then-Texas governor George W. Bush agreements giving Enron rights to Uzbekistan’s and Turkmenistan’s natural gas deposits, and Unocal a contract to build a trans-Afghanistan pipeline, was Afghanistan’s “president” Karzai, whose only support comes from the US (and a few opium farmers).
Which might at least be good news for investors, but the current administration will clearly continue to fail to restore the Glass-Steagall Act (officially the Banking Act of 1933, Public Law 73-66, H.R. 5661), one of the most important pieces of financial legislation ever passed in the USA. Glass-Steagall mandated that banks shall not engage “in the issue, flotation, underwriting, public sale, or distribution at wholesale or retail or through syndicate participation of stocks, bonds, debentures, notes, or other securities.” In other words, they are not to gamble recklessly with depositors’ funds, as they recently, almost calamitously, did. So although we may get some natural gas to replace dwindling petroleum, not only investing, but saving in banks, will remain anything but safe.
And meanwhile, “It Takes a Village” Clinton continues to give that phrase a new meaning, letting Monsanto Corp take subsistence villagers into wage-slavery (and worse). Not only have weeds and insects become resistant to Monsanto’s Roundup herbicide and to the toxins produced by its GM corn, but pesticide use on GM-planted acres has gone up, not down, while yields have become lower than those of their conventional counterparts. But you won’t read about these matters in the New York Times, nor, certainly, encounter them mentioned in even more conservative “news sources”…
There will be none of the “change” Obama promised until – and I apologize for repeating myself – we start:
1st: Halting production of genetically modified foods – it’s an experiment that has backfired. Those who can’t admit to limitations certainly can’t overcome them.
2nd: Ending corporate “personhood” – being simultaneously a person and property is wrong. We KNOW this.
3rd: Ending the wars we are engaging in, including the “Drug War” and all forms of armed interventionism. Justice is never the result of force.
4th: Turning “alternative” energy sources into essential ones, reducing our petrochemical addiction, pollution, global warming, economic globalization and sloth.
5th: Improving education, so that people can see the necessity of the above for survival and ethical self-respect.
6th: Reducing not population expansion, but the population, and also consumption, greed, the polluted mess we’ve made of our planet, economic disparity, injustice and exploitation.
7th: Reinvigorating ecological diversity, acknowledging the importance of all forms of diversity, and decentralizing power without giving free rein to petty dictators.

Strangely, the Wall Street Journal revealed a cover-up by the nation’s largest banks (including Goldman Sachs, JP Morgan, Bank of America, and Citigroup), which have been engaging in potentially criminal accounting practices to conceal the extent of their debts. The businessman’s paper of choice reported, "Major banks have masked their risk levels in the past five quarters by temporarily lowering their debt just before reporting it to the public, according to data from the Federal Reserve Bank of New York. A group of 18 banks....understated the debt levels used to fund securities trades by lowering them an average of 42 per cent at the end of each of the past five quarterly periods, the data show. The banks, which publicly release debt data each quarter, then boosted the debt levels in the middle of successive quarters." ("Big Banks Mask Risk Levels", Kate Kelly, Tom McGinty, Dan Fitzpatrick, Wall Street Journal, 9 April, 2010).
Seems we’re now living in a world without any accountability for those at or above a certain social and economic level.


Saturday, April 10, 2010

The punch-card loom, censors and reactive systems


Showing how interconnected seemingly disparate things can be, and how mysterious the progress of mankind, a direct line can be traced from weaving looms to the internet.
French weaver Joseph-Marie Jacquard (1752–1834) created the original programmable loom, advancing the weaving industry while also inadvertently inspiring the subsequent use of punch-card technology – the first practical use of a binary system. This eventually led to the modern computer.
The Jacquard Attachment, or Jacquard Mechanism, placed above the loom, produced Jacquard weaves, including brocade, damask, and brocatelle, characterized by complex woven-in designs, often with large design repeats or tapestry effects. Earlier, Jacques de Vaucanson, a prolific inventor of automata, had partly automated the loom by using perforated cards to guide hooks connected to the warp yarn. After Vaucanson’s death, this was improved on by Jacquard, who’d made an innovative loom of his own before finding Vaucanson’s at the Conservatory of Arts and Trades in Paris. Jacquard’s loom became one of the most important inventions of the Industrial Revolution, and in ways led to modern computing.
Wool cloth being one of the few things which medieval Europe produced and people elsewhere wanted, fleece from England was processed into fine and coarse fabric and exported – largely from the Low Countries (the Benelux countries of northwestern Europe, bordered by Germany to the east and France to the south, containing Brussels, Antwerp, Amsterdam, Rotterdam and The Hague), which, until 1580, were the most prosperous industrial area in Europe, with banking, finance and international commerce rivaling that of contemporaneous Venice. The best clothing came from there, and from 1400 to 1700, Dutch per capita income growth was the fastest in Europe. From 1600 to the 1820s its income level was the highest.
In Flanders, an early center of wool cloth production in the southwest of the Low Countries, marketing and production practices tended to be heavily regulated by guilds, banded together for mutual protection and assistance. This was necessary as war was frequent, piracy and fraud constant. A man alone had no chance of prospering amid the prevailing confusion and danger. The guilds of drapers, tailors, spinners, weavers and tapestry-makers existed only in particular towns. Flanders’ main cities (Bruges, Ghent and Ypres) led the European woolen textile industry, making high-quality draperies, linen, furnishing materials and luxurious fabrics for over 300 years before Jacquard and his idea. In Arras (90 miles north of Paris), silks, velvets and tapestries were woven; Brussels too was famous for tapestries. Flanders profited from its geographical position between England and the Rhineland, and between the Mediterranean and the Scandinavian and Baltic countries; the Low Countries developed significant trade navies before the British, and their urban centers dominated Europe’s textile making. Holland, Belgium and the parts of France involved in significant Renaissance weaving compose only a small portion of even just Western Europe, but, as guild workers found reason to fear flooded markets, improvements in mechanization felt threatening. Apprenticeships were arduous, and weavers had no confidence in being able to find other work. Like later Luddites, most Flemish weavers had no interest in the modernization which eventually came anyway.
By the mid–14th century, Antwerp, Leuven and Brussels gained an edge on Flanders, due to silting in water routes, greater enterprise, and British competition. England began to export, not import, woolen textiles, but much of its cloth export was sent un-dyed to Flanders for finishing. Special skills developed by Flemings and Netherlanders (starting in the middle of the 10th century) had made Flanders famous for its woolen clothing; when the raw material started coming from England, a monopoly on manufacture remained. “All the nations of the world,” it was claimed, “are kept warm by the wool of England, made into cloth by the men of Flanders.” Thread manufacture was known in England as “Dutch work”.
English textiles of the 13th and 14th centuries were mainly of linen and wool, and English wool exports concentrated in one town to minimize problems in collecting export duties. This was Calais, the nearest mainland port (then under English control). Silk was woven in London and Norwich in 1455; in 1564 Queen Elizabeth I granted a charter to Dutch and Flemish settlers in Norwich for production of damasks and flowered silks. With the growth of English manufacturing in the 16th century, more wool was used domestically, and export of the raw product diminished. But the best weaving was on the continent; Italian and Flemish weavers produced tapestries in Fontainebleau (with a large royal residence near the Seine River, just south-southeast of Paris). Then weavers were brought to Lyon (Lyons, France’s 2nd largest city, 240 miles south-southeast of Paris, by an ancient trade route linking the North Sea and the Mediterranean), which then became the center of European silk manufacture and France’s second most important educational centre.
Guilds opposed cotton manufacture in the 18th century; similar war was waged over a longer period on behalf of wool against silk and linen. Spinning and weaving, done for four millennia, are essential parts of civilization, but workmen two centuries ago used mostly the same sort of tools, and followed nearly the same ways, as of a thousand years before. Production was done by hand, mostly in private houses; the largest factories rarely comprised more than twenty or thirty workmen.
Wool or cotton was first cleaned, or carded, its filaments opened up by a minute sort of fork and spread out in parallel lines into loose ribbons of eight or more fibers, according to the thickness required. The fibrous ribbon was coiled round a spindle; then the spinner, having adjusted one end of the ribbon to a spinning-wheel, turned it with one hand, feeding it with the other while giving the fiber a slight twist. The yarn produced was given to a weaver, who worked a loom with both hands and feet: threads rose and fell, alternately lifted and depressed by a treadle, as he threw a shuttle backwards and forwards between them. Finished cloth was rolled onto a beam, to be sent to market.
Until 1589, most elaborate fabrics in France were Italian; Flemish weavers were brought in, and soon French patterned fabrics showed a distinctive style – one based on symmetrical ornamental forms. From about 1600, the Dutch led the world in trade, financial institutions, ingenuity and science, with the first joint-stock companies and stock exchanges, and triumphs military, artistic and political – somewhat due to the integration of talents from all over the Western world. Through the 17th century Holland was one of the great commercial powers of Europe; innovations in finance, technology, marketing and communications offered enormous advantages. Much of the gold plundered from the Americas found its way there. Important innovators who expedited production weren’t Flemish and didn’t work in Flanders, but lived and worked, rather, to the north and south.
John Wyatt thought of a way to facilitate fiber spinning, in 1730. Instead of passing the carded fiber twice round a hand-worked one-thread spinning-wheel, he arranged to draw it between a pair of revolving cylinders, one plain and the other fluted; then the thread passed between successively faster rollers, revolving three, four, or more times quicker than the first pair, according to the fineness wanted. This took much less time than hand-spinning required, and the spinning machine also operated with much more exactness.
The next important step was John Kay’s 1733 fly-shuttle, which moved much more swiftly and easily, enabling weavers to work twice as fast. The weavers he expected to benefit from it rejected it, though, fearing unemployment. They mobbed his house, nearly killing him. He said he had more inventions, but “the reason that I have not put them forward is the bad treatment that I had from woollen and cotton factories, in different parts of England…” Weavers’ dependency on yarn spinners meant that, until further improvements were made in spinning, Kay’s innovation was of limited benefit.
About 1764 James Hargreaves developed a multiple spinning machine, conceived when observing a spinning wheel accidentally overturned. He envisioned many spindles turning, and constructed a machine for one individual to spin several threads at a time. After he began selling the machines, hand spinners feared unemployment and broke into his house. They destroyed a number of jennies, causing Hargreaves to relocate. His machine was superseded by Richard Arkwright’s spinning frame, or water frame (so-called because it operated by waterpower), which produced cotton yarn suitable for warp in a manner quite similar to Wyatt’s. Hargreaves’ spinning-jenny thread lacked the strength of Arkwright’s yarn. With partners, Arkwright opened factories, equipped within a few years with machinery for carrying out all phases of textile manufacturing, from carding to spinning. In 1769 he got a patent “for the making of yarn from cotton, flax, or wool,” in which Wyatt’s principle was adopted, and improved on. In 1771 he established a successful mill; in 1775 he applied for, and received, a second patent, “for carding, drawing, and roving machines, in preparing silk, cotton, flax, and wool for spinning” – but it was withdrawn, on grounds of prior production. He took ideas for his machines from others, but was better able to build machines and to make them work, and so got knighted.
Samuel Crompton, who worked a spinning-jenny, heard of Arkwright’s drawing-machine, and figured his work might be done more easily if he could construct a machine combining the principles of both. With poor tools and materials, he made what he called a spinning mule (it being the offspring of two different kinds of instruments). It permitted large-scale manufacture of high-quality thread and yarn, with just a few boys or girls seeing that no knots or flaws spoiled the finer, firmer, and more uniform threads. Crompton, a plain, honest man, didn’t perceive much of his machine’s value. Having made it for his own use, he was annoyed at the curiosity of marveling neighbors who climbed up to peep in at the window. He built a screen to block the view; but eventually told them that, if fifty of them would subscribe a guinea a-piece, he’d show them his machine and teach them to make ones like it for themselves. Quickly copied by thousands, it was then improved upon by Crompton himself. He showed the machine to manufacturers, hoping they’d buy it, but got only £60. Years later, in 1811, over 360 mills in England used 4,600,000 spinning-mule spindles, producing about forty million pounds of cotton a year and giving work to seventy thousand people (plus 150,000 more weaving the yarn). In 1812 Parliament granted Crompton £5,000, which he used to enter business, unsuccessfully, first as a bleacher, then as a cotton merchant and spinner. He died in poverty in 1827.
Edmund Cartwright, a clergyman, saw Sir Richard Arkwright’s cotton-spinning mills in 1784. Inspired to construct a similar machine for weaving, he invented a crude power loom, first patented in 1785. He explained, “As I had never before turned my thoughts to anything mechanical, either in theory or practice, nor had ever seen a loom at work, or knew anything of its construction, you will readily suppose that my first loom was a most rude piece of machinery.” But the principle was there, and though Cartwright’s power-loom was of little value in itself, others adapted the idea, establishing power-looms. Cartwright set up a weaving and spinning factory, but had to surrender it to creditors. In 1789 he patented a wool-combing machine which lowered manufacturing costs, but it didn’t benefit him financially until 1809, when the House of Commons voted Cartwright £10,000 in recognition of the benefits his power loom had conferred on the nation.
These machines would have been of much less advantage without Scottish James Watt’s steam-engine, patented in 1769. Its first application to textile manufacture was in 1785, when a Nottinghamshire cotton-spinner had one set up in his works. In 1787 three others were started, and from that time they began to be rapidly employed, so rapidly that orders could only barely be filled. The immeasurable advantages of steam-power over water-power gave manufacturers liberty to choose convenient sites, and a much wider field of factory enterprise was opened.

Bengali textiles were of major interest to the British and French from the last quarter of the 17th century, but both the French (1686) and the British (1700) forbade import of printed and painted cottons to protect domestic textile producers. Both countries imported from Bengal for re–export (and much was subsequently smuggled back into England). The Dutch didn’t protect their textile industry, and ended up marketing a large part of French and somewhat less of the British re–exports of Indian textiles, within Europe. The volume of Dutch foreign trade dropped 20% from 1720 to 1820.
From the 1760s, there was spectacular growth in the cotton textile industry. Demand for cotton clothing and household furnishings had been nurtured by a century and a half of imports from India. The prospects and profitability for domestic expansion were transformed by a wave of technological innovation. Cotton was much easier to manipulate mechanically than wool, and mechanization had a dramatic impact on labor productivity. When American Eli Whitney invented the cotton gin in 1793, the cost of raw cotton imported from America was substantially reduced.
French textiles advanced in design and technique under Louis XVI (1774–93); classical elements intermingled with the earlier floral patterns. In the 1790s the French Revolution interrupted the weavers of Lyon, but the industry soon recovered. The Low Countries were annexed by revolutionary France in 1795; in 1814 they were reunited as the independent Kingdom of The Netherlands.

But the greatest change of all in weaving was Joseph Marie Jacquard’s. A humble silk-weaver by birth, at 10 years old Jacquard worked as a draw-boy – a repetitive, unpleasant job. He later worked as a bookbinder, type founder and cutter. One story says he saw in an English newspaper an ad offering a prize for the invention of a plan for weaving nets by machinery. For his own amusement, he produced a loom for that purpose, but made no attempt to obtain the reward. After showing his invention to a friend, he put it aside. Then one day he was sent for by the prefect of the department (administrative district), who asked him to make a new one, the original either lost or destroyed. He did, and a few weeks later was summoned to Paris and introduced to Napoleon Bonaparte. A minister asked if he was the man “who pretends to do what God Almighty cannot do, tie a knot in a stretched string?” Jacquard answered that he could do, not what God couldn’t, but what God had taught him to. Napoleon rewarded him with a pension of a thousand crowns, gave him employment in the Conservatoire des Arts (where he found Vaucanson’s loom), and later encouraged the adoption of the excellent Jacquard punch-card loom, the “Jacquard Mechanism”.
Another story has Jacquard provoked by the dullness of weaving into designing a machine that made his job obsolete. When he inherited two looms, he was able to rework them, using inventions by Frenchmen Basile Bouchon, Jean Falcon and Jacques Vaucanson to devise a series of string-connected perforated cards against which needles were pressed; the ‘data’ cards held a ‘program’ that chose threads – in particular, individual warp yarns. The interchangeable punched cards of ‘Jacquard looms’ controlled the loom so that any desired pattern could be obtained automatically.

But his new looms were publicly destroyed by official conservators of the trade of Lyons, who pronounced Jacquard worthy only of hatred and ignominy. Jacquard died in 1834, hardly living long enough to see the fruits of his invention. It was first generally adopted, in fact, in Saint-Étienne (about 35 miles southwest of Lyon), the chief rival of Lyons; in England it was soon used extensively in other manufactures besides that of silk, the first one set up in Coventry in 1820.
No other manufacturing invention has nearly so much aided those concerned in the production of textile fabrics; by 1838 Coventry and its adjoining villages had 2,228 Jacquard looms, and the number continued to increase till 1858, when they began to be replaced by the a-la-bar looms invented at Saint-Étienne. Though mechanical appliances didn’t find favor with French weavers, their zeal in cultivating artistic tastes, and in rendering their wares more attractive than those produced in other countries, kept them competitive.
Jacquard’s thick pasteboard cards had rectangular holes punched in them – either hole, or no hole, making a binary. Multiple rows of holes are punched on each card, and the many cards that compose the design of the textile are strung together in order. Linked by chains, with small sensing pins detecting the presence or absence of holes, the binaries cause one color thread or another to be picked up, according to the hole’s row position, card order and corresponding order of the actual colored threads. When a hole in a card passes a sensing needle, it goes through, activating its specific threading hook (‘Bolus’ hook), which is either up or down, and one needle or another takes a specific thread through the weaving. Then, in turn, the harness is raised or lowered, placing the warp threads so that the weft will lie either above or below. Thus the pattern is formed, and the draw-boy job made unnecessary. The pattern of holes on the cards, each row corresponding to a row of the design, determines the resulting textile pattern, and not only allows patterns to be utilized again and again, but practically eliminates human error.
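The logic of those cards is simple enough to sketch in a few lines of modern code. Here is a toy Python illustration – the cards, the pattern and the function name are all invented for this sketch, and no actual loom mechanics are modeled – treating each card as a row of bits, where hole = hook raised = warp up:

```python
# Toy sketch of a Jacquard card chain: 1 = hole (hook raised, warp up),
# 0 = no hole (warp down). Cards and pattern are invented for illustration.

def weave(cards, width):
    """Render each card as one weft pass: '#' where the warp is raised."""
    rows = []
    for card in cards:
        assert len(card) == width, "each card controls every hook in the row"
        rows.append("".join("#" if hole else "." for hole in card))
    return rows

# Four cards encoding a simple repeating pattern across 8 hooks:
cards = [
    [1, 1, 0, 0, 1, 1, 0, 0],
    [0, 1, 1, 0, 0, 1, 1, 0],
    [0, 0, 1, 1, 0, 0, 1, 1],
    [1, 0, 0, 1, 1, 0, 0, 1],
]

for line in weave(cards, 8):
    print(line)
```

Chained end-to-end, the same cards repeat the pattern indefinitely – no draw-boy, and no human error.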
Jacquard’s loom, claimed as the first machine to use punched cards, enabled ordinary workers to replicate beautiful patterns in a style previously accomplished only with patience and skill. His punch-card idea was revolutionary in that it gave a machine the ability to follow an algorithm, and because the cards stored information: an ability which helped spark the computer revolution. Jacquard’s punch-card system proved so useful that it was incorporated into ideas for most computer systems – which even now still use binaries.
Jacquard completed his loom in 1801, and in 1803 he was awarded a lifetime pension from Napoleon. Many master-weavers, though, feared their jobs would become obsolete, since complicated patterns could now be produced by less skilled weavers. The knowledge of an expert weaver could be stored on paper, ready for anyone to use. It was in protest over this that numerous Jacquard looms were destroyed, and threats were made on Joseph-Marie’s person – as had happened to John Kay and James Hargreaves, prefiguring the peasant mob in Mary Shelley’s Frankenstein going after the mad scientist-inventor (though the first use of the term ‘scientist’ was still a couple of decades off).
The Jacquard loom raised productivity and reduced costs, increasing business, though; most high quality tapestry reproduction is still woven on jacquard looms of similar design, just as the same system of binary on and off coding is used in computers, and in other applications. Computer guided looms are excellent for reproducing detail, and for making elegant, many-colored motifs.

On seeing Jacquard’s punched-card system, mathematician Charles Babbage (1791-1871) was inspired to use the same principles to design a mechanical calculating machine, the forerunner of modern computers. Although his first machine, the Difference Engine, wasn’t yet finished, in 1834 Babbage made plans for a new kind of calculating machine, which he called the “Analytical Engine”.
Babbage, who invented the cowcatcher and assisted in postal reform (helping establish the modern British postal system), was a pioneer in the fields of operations research and actuarial science (he compiled the first reliable actuarial tables), pioneered lighthouse signaling, and was the first to suggest that the weather of years past could be read from tree rings. He wrote about factories: not only was his highly original discussion of the division of labor followed by John Stuart Mill, but his discussion of the effects of developments in production technology on the size of factories was taken up by Karl Marx, and became fundamental to Marxist theory regarding capitalist socio-economic development.
In recognition of the high error rate in the calculation of mathematical tables, Babbage wanted to find a method to calculate them mechanically, removing human sources of error. Tables then in use contained many errors, life-threatening for sailors; Babbage argued that by automating navigation-table production, he’d assure their accuracy. Three major factors influenced him: a dislike of untidiness, experience working on logarithmic tables, and the work on calculating machines by Wilhelm Schickard, Blaise Pascal and Gottfried Leibniz. Babbage presented a paper, ‘On the Theoretical Principles of the Machinery for Calculating Tables’, to the Royal Society of London, and won its first Gold Medal, in 1823. He thus gained support for his Difference Engine, obtaining one of the world’s first government grants for research and technological development. When he tried to get further funding for another machine, his Analytical Engine, though, Parliament refused, pointing out that the first machine was still unfinished. Babbage found sympathy for his new project abroad; in 1842, Italian mathematician Luigi Menabrea published a memoir in French on the subject of the Analytical Engine. Babbage enlisted Ada Byron King to translate the memoir, to which she appended notes that became the source of her enduring fame.

Augusta Ada Byron King, Lady Lovelace (1815-1852) was but five weeks old when her mother, the mathematically inclined Anne Isabella Milbanke, left her brief marriage with the poet Lord George Gordon Byron. She encouraged Ada’s mathematical and scientific interests, hoping to suppress any inclination to wildness inherited from her father. Tutored in advanced studies by mathematician-logician Augustus De Morgan, the first professor of mathematics at the University of London, in 1835 Ada married William King, the 8th Baron King; he was made an Earl in 1838 and Ada became Countess of Lovelace.
In addition to writing programs for Babbage’s machine, she also foresaw the capability of computers to go beyond mere calculating or number-crunching - to manipulate symbols according to rules. Others, including Babbage himself, focused only on number-crunching capabilities. Ada King was one of the few who fully understood Babbage’s ideas; had the Analytical Engine actually been built, her program would have been able to calculate a sequence of Bernoulli numbers, which have to do with a theory of permutations, combinations and probability.
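Her actual Note G laid that computation out as a table of the Engine’s operations; as a rough modern illustration only (this Python sketch is not her program), the Bernoulli numbers she targeted can be generated from the standard recurrence, in which the sum over j of C(m+1, j)·B_j equals zero:

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """First n+1 Bernoulli numbers B_0..B_n (B_1 = -1/2 convention),
    from the recurrence sum_{j=0}^{m} C(m+1, j) * B_j = 0."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(comb(m + 1, j) * B[j] for j in range(m))
        B.append(-s / (m + 1))     # exact rational arithmetic throughout
    return B

print([str(b) for b in bernoulli(8)])
# ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42', '0', '-1/30']
```

Exact fractions matter here: the odd Bernoulli numbers past B_1 vanish exactly, which floating-point arithmetic would obscure.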
In 1843, she published the definitive paper on Babbage’s Analytical Engine, explaining that it operated on general symbols rather than on numbers, and established “a link… between the operations of matter and the abstract mental processes of the most abstract branch of mathematical science”. The Analytical Engine went beyond the bounds of arithmetic and existing calculators, she claimed; it “weaves algebraic patterns, just as the Jacquard-loom weaves flowers and leaves.”
Babbage’s “Engines” were never completed, but Ada King, Lady Lovelace, became expert on the process of sequencing instructions on punched cards which the Analytical Engine could read, and was, thus, the world’s first computer programmer.
The Difference Engine was a calculator, but it had a storage place where data could be held for later processing, and could stamp its output into soft metal usable later to print a plate. When funding for it ran out in 1833, Babbage was already working on his more revolutionary Analytical Engine – a general-purpose digital computing machine, program-controlled to perform almost any calculation set before it (some say any calculation, but it could no more deal with “orders of infinity” than can any person or other machine). Its four components – the mill, store, reader, and printer – are the significant components of every computer today. The mill, or calculating unit, paralleled the central processing unit (CPU) of a modern computer; the store kept data for processing, like memory and storage in today’s computers; and the reader and printer dealt with input and output, respectively.
Though steam-driven, the Analytical Engine was designed with more storage capacity than any computer built before 1960 ever had, with an ability to place numbers and instructions temporarily in its store and then return them to its mill for processing at an appropriate time. Its design also included conditional branching – an evaluative ability in its conditional control transfer, whereby it could jump to a different instruction depending on the value of entered data. This impressively useful feature was missing even in some computers of the early to mid 20th century.
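That architecture – reader, mill, store, printer, plus conditional control transfer – can be sketched as a toy machine in a few lines of modern code. The miniature instruction set below is invented purely for illustration; Babbage’s actual operation and variable cards worked quite differently.

```python
# Toy sketch of the Engine's four parts. "Cards" are tuples read in
# sequence (reader); SUB is the arithmetic (mill); variables live in a
# dict (store); PRINT collects output (printer). BRZ is the conditional
# branch. The instruction set is invented for illustration.

def run(cards, store):
    out, pc = [], 0
    while pc < len(cards):
        op, *args = cards[pc]
        if op == "SUB":                # mill: store[a] - store[b] -> store[c]
            a, b, c = args
            store[c] = store[a] - store[b]
        elif op == "PRINT":            # printer: emit a stored value
            out.append(store[args[0]])
        elif op == "BRZ":              # conditional branch: jump when zero
            v, target = args
            if store[v] == 0:
                pc = target
                continue
        elif op == "JMP":              # unconditional jump
            pc = args[0]
            continue
        pc += 1
    return out

program = [
    ("PRINT", "n"),            # 0: print n
    ("BRZ", "n", 4),           # 1: once n reaches zero, jump past the end
    ("SUB", "n", "one", "n"),  # 2: n = n - 1
    ("JMP", 0),                # 3: loop back
]
print(run(program, {"n": 3, "one": 1}))  # [3, 2, 1, 0]
```

The conditional branch is what lifts this from a calculator to a computer: without BRZ, the card chain could only run straight through once.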
The Analytical Engine would have been a real computer as understood today, but Babbage ran into implementation problems again. His ambitious design was judged infeasible given the technology of the day, and failure to produce the promised mathematical tables with the Difference Engine curtailed funding. Babbage, more interested in innovation than in constructing the tables that were his work’s original raison d’être (and which the government placed more importance on), never completed a machine – in part because Victorian mechanical engineering wasn’t sufficiently developed to produce precision parts. All the same, Babbage’s Analytical Engine was a new idea, and the first machine that deserved the name computer.
Swedish printer Georg Scheutz successfully constructed a machine based on the designs for Babbage’s Difference Engine, in 1854. It printed mathematical, astronomical and actuarial tables with unprecedented accuracy, and was used by the British and American governments. In 1991 a perfectly functioning difference engine was constructed from Babbage’s original plans. Built to tolerances achievable in the 19th century, it proved Babbage’s machine would have worked. But it weighed five tons, and there is no direct line of descent from Babbage’s work to the modern electronic computer invented by pioneers of the electronic age in the late 1930s and early 1940s – people largely ignorant of Babbage’s work.

Societies, or rather the major wielders of power in societies, have long sought to assess population accurately, by taking censuses. The noun ‘censor’ carries three meanings in English: ‘an official responsible for conducting a census’; ‘an official responsible for changing texts, films, etc., so that they accord with what is deemed acceptable for public consumption and public morals, and are not helpful to enemies of the state’; and ‘something which prevents unacceptable memories, ideas and wishes from coming into consciousness’. All three seem to derive from the office of Public Censor in Roman society.
Processing census information costs considerable time and money, but there are many reasons for census counts: to determine the proportions of men, women and children, the numbers in different occupations, and incomes, for use in assessing tax levies, determining entitlement to representatives in legislative bodies, and planning future policy; but especially to facilitate taxation, military induction and/or corvée (and other) labor, and planning for water, fuel and food storage, waste disposal and other matters related to the maintenance of power. One interesting kind of census was kept in London from the 1500s, to warn those with the wherewithal to do so to flee the city during plague outbreaks.
When the U.S. population reached 31.4 million, in 1860, it was deemed necessary to limit census forms to 100 questions. Some kind of mechanical assistance seemed unavoidable for the next census, and a crude device, little more than a convenient data entry and output tool, was created by Charles Seaton. Then Herman Hollerith of the Census Office, with John Shaw Billings, who was involved in statistical analysis of the 1880 returns, perfected a system for encoding census returns onto punched cards, cards quite like those of the Jacquard loom. The new machinery could process the cards to tally various totals; success led to the idea being copied (by companies keen on lucrative census contracts), and in 1896 Hollerith incorporated the Tabulating Machine Company to manufacture his machines. Merged in 1911 into the Computing-Tabulating-Recording Company, it became the leading American manufacturer of punch-card tabulating systems used by governments and private businesses, and eventually International Business Machines (IBM), the predominant marketer of computers through the 1960s and ’70s. Hollerith’s format for storing information remained in use into the 1960s.
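The principle of Hollerith’s tabulation, running each card past counters and tallying a chosen field, can be sketched in a few lines of Python. The card layout below is hypothetical (real Hollerith cards encoded fields as punch positions, not text), but the tallying logic is the same:

```python
from collections import Counter

# Hypothetical "cards": one record per person, a few census-style fields.
cards = [
    {"sex": "M", "occupation": "farmer"},
    {"sex": "F", "occupation": "teacher"},
    {"sex": "M", "occupation": "farmer"},
]

def tabulate(cards, field):
    """Run every card through the tally and total up one field's values."""
    return Counter(card[field] for card in cards)

print(tabulate(cards, "sex"))         # totals by sex
print(tabulate(cards, "occupation"))  # totals by occupation
```

Once records are machine-readable, re-tallying by a different field costs almost nothing, which is precisely what made the punched-card system so much faster than hand counts.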
But as taxation, banking, government debt and borrowing against money lent to government (and other credits issued) became ever greater features of economic life, the tapestry of interconnected finances demanded ever more intricate, and more quickly accessible, record keeping. Especially with installments and compound interest, calculations required figures referred to repeatedly, and, much as with weaving, doing things by hand, or even with basic machinery, became less and less viable as markets, and economies, grew in size.
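The compound-interest arithmetic referred to here, tedious and error-prone by hand, reduces to a one-line formula for a machine. A minimal Python sketch (the principal, rate and term are invented for illustration):

```python
def compound(principal, annual_rate, periods_per_year, years):
    """Standard compound-interest formula: P * (1 + r/n) ** (n * t)."""
    return principal * (1 + annual_rate / periods_per_year) ** (periods_per_year * years)

# $1000 at 5% annual interest, compounded monthly for 10 years.
print(round(compound(1000, 0.05, 12, 10), 2))  # 1647.01
```

Done by hand this is 120 successive multiplications, each reusing the previous result, exactly the kind of repeated-figure work the text describes.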
The thought processes involved here reflect Leonardo da Vinci (1452-1519), Galileo Galilei (1564-1642), Isaac Newton (1642-1727), J.S. Bach (1685-1750), E.T.A. Hoffmann (1776-1822) and many others who influenced our way of life and outlook on the world. The music box (about 1770) grew into the player piano (1897), and eventually the DVD player (1995). The Industrial Revolution wasn’t only about industry; it involved determining connections and dependencies, correspondences and affinities. Life was no longer primarily a local matter, and more and more had to be considered and assimilated into memory, or at least into an accessible system.

In the late 1930s, Howard Aiken, after working with digital calculating devices, proposed new ideas concerning the structure of an ideal computer: it should represent both negative and positive numbers, perform all standard arithmetic operations, and carry out sequences of operations. Business machines of the time used plug-boards (something like telephone switchboards) to route data manually; Aiken chose not to use them for specifying instructions, making his machine easier to program than much more famous subsequent computers. Combining elements of Babbage’s Analytical Engine, which he’d read about, with punched cards for data representation (as delineated by Hollerith), Aiken was able to interest IBM in his proposal, and between 1939 and 1944 the “Mark I” was built. It was already out of date by the time it became operational; a device at the Moore School of Electrical Engineering of the University of Pennsylvania superseded it.
During the time Aiken was doing his significant work, the first special-purpose electronic computer was created by John Atanasoff, physicist and mathematician at Iowa State College (now Iowa State University). Atanasoff needed a computer to help solve the complex equations he was working on. He tried modifying a leased IBM tabulating machine, but as the machine was leased, not owned, he couldn’t go far with that, so he set about designing his own. One night, experiencing difficulty, he decided he wanted a drink. It’s said, perhaps a bit apocryphally, that he traveled 200 miles to the state border to get one. Ames, where the university is, though central, is far less than 200 miles from any Iowa border, and Dubuque’s sole brewery to survive Prohibition reopened in 1935. Atanasoff is said to have developed, in 1937, four ideas that led to the modern computer industry. Be all that as it may, the story goes that after a relaxing drive, and drinks, he thought up the idea of designing a machine to use binary numbers, and was soon developing capacitors to store data in binary form. He designed and began construction of a large, general-purpose computer, the Atanasoff-Berry Computer, or ABC. By 1939, Atanasoff and graduate student assistant Clifford Berry had a prototype, and had started on the final machine, which used vacuum tubes and electronic logic circuits, with capacitors on a rotating metal drum for memory - the first computer to use entirely electronic methods of calculation. The rotating capacitor drums were a novel storage medium, but input was still through punched cards. The machine was never completed, as both Atanasoff and Berry left it to work for the U.S. military at the onset of WWII.
The ABC’s important influence on computing was through inspiring John Mauchly, a key designer of the best-known early computer, the ENIAC (Electronic Numerical Integrator and Calculator). It was started with an initial budget of $61,700, but total cost by the time a demonstration was held in February 1946 was almost half a million dollars. ENIAC has a strong claim to being the first general-purpose electronic computer. Its successor, the EDVAC, was finished in 1952. The first commercial computer was the UNIVAC (UNIVersal Automatic Computer), delivered to the US Census Bureau in 1951 by Remington Rand. In total, 44 of these were sold.

Mechanical duplication of arithmetic processes eventually had to be abandoned in favor of electrically-based switching components and the formalisms of binary and Boolean algebra. Faster, more reliable calculating machines resulted. Memory moved to tape, instead of a metal drum, and computers came to look like those in early James Bond movies. The major technological improvements in computer construction since are mostly the result of smaller, faster and more reliable binary switching devices. By the mid-’50s, huge machines of unprecedented speed were operational at many US and UK institutions, but coding programs was still difficult, time-consuming and error-prone; users faced problems describing the operations to be performed, as everything had to be expressed in the abstract languages the machines needed in order to interpret and execute commands.
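The Boolean switching logic mentioned above is easy to illustrate: binary addition reduces to a handful of logic gates. A minimal Python sketch of a half and full adder (illustrative only; real hardware composes these from transistors):

```python
# A half adder is just XOR (sum bit) and AND (carry bit) - the Boolean
# primitives that replaced mechanical arithmetic.

def half_adder(a, b):
    return a ^ b, a & b        # (sum, carry)

def full_adder(a, b, carry_in):
    """Chain two half adders to add three bits."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2         # (sum, carry_out)

# Binary 1 + 1 + 1 = 11: sum bit 1, carry bit 1.
print(full_adder(1, 1, 1))  # (1, 1)
```

Chaining full adders bit by bit gives addition of whole binary numbers, which is why faster switching devices translated directly into faster machines.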
Programming languages like FORTRAN (‘formula translation’, designed at IBM) were developed, specifically tailored to applications in science and engineering; others, including COBOL (common business oriented language), were intended for data processing and record maintenance systems. These languages began to make computers more accessible to those without detailed technical knowledge of computer operation. LISP (list processing), developed by John McCarthy and a team at MIT (Massachusetts Institute of Technology), became popular in academic environments as a language for programming artificial intelligence systems; the unextended functional core of the language is known as ‘pure’ LISP, which practical implementations extend in many ways.
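The flavor of ‘pure’ LISP, lists built from cons cells and processed by recursion without mutation, can be mimicked in a few lines of Python. This is an illustrative sketch, not any particular LISP implementation:

```python
# Cons cells as immutable pairs; NIL is the empty list.
def cons(head, tail): return (head, tail)
def car(cell): return cell[0]
def cdr(cell): return cell[1]

NIL = None

def from_py(items):
    """Build a cons list from a Python list."""
    lst = NIL
    for item in reversed(items):
        lst = cons(item, lst)
    return lst

def length(lst):
    """Recursive list length, pure-LISP style: no loops, no mutation."""
    return 0 if lst is NIL else 1 + length(cdr(lst))

print(length(from_py([1, 2, 3])))  # 3
```

Everything is expressed as recursion over `car` and `cdr`, which is what made LISP so natural for the symbolic manipulation AI research required.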
In 1977 the first ‘personal’ computers arrived, a step up from calculators and typewriters with some memory, for the non-professional. Portable ones came on the market in 1990, and soon people were buying modems and connecting to the Internet without need of special computer languages. Several thousand individuals work in the Internet Engineering Task Force, a loosely structured nonprofit international society headquartered in Reston, Virginia, to set Internet standards, while the Internet Corporation for Assigned Names and Numbers (ICANN), also a private nonprofit, oversees Internet domain names and numbers.

In all this we can see development taking place not continuously or even very sequentially, but in leaps and jumps, guided often by individual bursts of insight, but always utilizing the work of others.
Modern computing can be divided into phases. Phase 1, between 1937 and 1950, was ‘exploration and definition’: no two machines were alike, and virtually all were built in university or government labs. The huge ENIAC computer ran a mathematical model of a hydrogen bomb. Phase 2, 1950 to ’59, started the commercial computing era, with UNIVAC, IBM, Burroughs, NCR (National Cash Register), Westinghouse, Bendix, Royal McBee, Philco, General Mills and a few other brands of commercial machines. They didn’t share design specs, and there were few standards. A big seller, relatively, was the IBM 650. In Phase 3, the transistor and coincident-current core memory (required to take advantage of the switching speed of transistors) were incorporated. Suddenly fast, inexpensive and reliable computers were being designed; the makers of huge ‘big iron’ computers made even bigger, faster and more reliable ones, for big corporate and governmental data processing. Between 1960 and 1973 a plethora of computer companies were formed… maybe as many as in the first decade of the microcomputer era, the next phase. Computers still required computer languages, and there wasn’t much networking or exchange of information between machines, which required the work of a multitude of programmers, but computers had become an important part of modern culture.
In 1971 the first silicon-based microprocessor (microchip) was developed, and Ray Tomlinson sent himself an email between two computers in his office, an early example of e-mail messaging, but not, as often claimed, the first. At MIT (Massachusetts Institute of Technology), a kind of e-mail system (“mailbox”) had been introduced in 1965; that system, though, just had terminals connected to a mainframe, with no independent memory storage. Tomlinson picked the @ (for ‘at’) symbol to separate user names from host names when sending messages from one computer to another, and so is credited with inventing email. Usenet, one of the oldest computer network communications systems (developed at the University of North Carolina at Chapel Hill and Duke University in 1980, over a decade before the World Wide Web was introduced), gave the general public its first access to the Internet. The spread of internetworking (the term from which the Internet got its name) soon began to form a global network, for which standardized protocols were implemented in 1982. The World Wide Web, a subset of the greater Internet, was opened to the general public in 1992, and made it possible for pictures and sound to be exchanged. Web-based archiving of Usenet posts, with a large, searchable database, began in 1995, the year America On-Line (AOL) began offering Web services. Later that year, Microsoft released Internet Explorer 1.0. By 1998 there were 750,000 commercial sites on the World Wide Web. In the late 1990s advertising revenue was the main quest of many Internet sites; some began to offer free or low-cost services of various kinds, augmented with ads. By 2001 this speculative bubble had burst, but the Internet surely will afford new commercial opportunities way beyond pornography sales and auctioneering.
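Tomlinson’s @ convention is still how addresses are read today: everything after the last @ names the host, everything before it names the user. A minimal Python sketch (the address itself is a made-up example):

```python
def split_address(address):
    """Split 'user@host' at the last '@', per Tomlinson's convention."""
    user, _, host = address.rpartition("@")
    return user, host

print(split_address("jbarlow@example.com"))  # ('jbarlow', 'example.com')
```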
The Internet inspired new software and programming, especially the dividing of computational tasks into subtasks assigned to separate processors. This started in the ’70s, and allowed increased efficiency and speed, facilitating connectivity for wide-spread sharing of information; it became available to the general public early in the 1990s. The World Wide Web disseminates digitized text, pictures and audio and video recordings, but information currently available on it might not be available later without careful attention to preservation and archiving techniques; the key to keeping information continuously available is management of infrastructure. Repositories of information, stored as digital objects, abound throughout the Internet; at first dominated by digital objects specifically created and formatted for the World Wide Web, eventually they’ll include all kinds of formats. Movement of digital objects from one repository to another can leave them available (to users authorized to access them), while replicated instances in multiple repositories provide alternatives for users better able to interact with some parts of the Internet than others. At the same time, the Internet has brought things to the point where most computers no longer do any actual computing, but serve only as communicative and entertainment devices, no longer contributing to production as Jacquard’s punch cards did.
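The division of a computation into subtasks handed to separate workers, described above, can be sketched in a few lines of Python. A minimal illustration: threads here stand in for the separate processors of the text, but the divide-and-combine principle is the same:

```python
from concurrent.futures import ThreadPoolExecutor

def subtotal(chunk):
    """The subtask: each worker sums one slice of the data."""
    return sum(chunk)

numbers = list(range(1, 101))          # 1..100
chunks = [numbers[i:i + 25] for i in range(0, 100, 25)]  # four subtasks

# Hand each chunk to a separate worker, then combine the partial results.
with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(subtotal, chunks))

print(sum(partials))  # 5050
```

The speedup comes from the subtasks being independent: no worker needs another’s intermediate result until the final combining step.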
A satellite-enabled communication and positioning system, the Global Positioning System (GPS), meanwhile, is becoming increasingly available in most places, via special commercial GPS receivers. Used with mapping software, GPS can locate one’s position, help with travel routes and finding places of interest, and allow others to locate you. As exciting, if not more so, are ‘reactive’ systems that can control processors in machinery, from factory production lines to home appliances. These take in data from sensors and make responses, communicating through wires, optical fibers or radio transmissions. Microwave radio also carries computer network signals, generally as part of long-distance telephone systems, but increasingly for wireless networks within a building (Wi-Fi). Wide area networks (WANs) span cities, countries and the globe, generally using telephone lines and satellite links. The Internet connects multiple WANs, making a network of networks.
Future directions will likely involve increased availability of wireless access: wireless services enable applications not previously possible in any economic fashion. For example, global positioning systems (GPS) combined with wireless Internet access can help mobile users generate precise accident reports and traffic analysis, plus initiate recovery services and congestion control. In addition to wireless laptop computers and personal digital assistants (PDAs), wearable devices with voice input and special display glasses are under development. It may become possible for users to access networks at speeds that allow multiple video streams to occupy only a small fraction of available bandwidth. Remaining bandwidth could be used to transmit auxiliary information about data being sent and received. Integrated broadband systems will simultaneously carry multiple signals - data, voice and video. It may become possible to assign unique addresses to most electronic devices, giving new meaning to “remote control”!
As computer size shrinks and microchips become more and more powerful, with more and more usable memory, researchers and entrepreneurs find new possibilities in mobile computing (e.g. miniature keyboards, or a stylus with handwriting-recognition software allowing users to write directly on the screen). Much like the first personal computers, PDAs (‘personal digital assistants’; handheld computers) were built without a clear idea of what people would do with them, and offered little advantage over paper-and-pencil planning books.
But a new generation of computers vastly smaller than those utilizing silicon chips might indeed become possible, with nanotechnology utilizing molecular memory (organic-molecule nanogate on-off switches). Almost anything could then become computerized, computer-monitored or even controlled from a distance, opening a plethora of new possibilities - even startling new medical possibilities, including a whole new world in diagnostics.
Someday soon, instead of being a dream machine that many desire, microprocessors may be found wherever humans go. The technology could be invisible, and respond to various patterns of behavior, with computers a transparent part of the physical environment…
