Applied Systems Intelligence is working on a software package for “sifting through and analysing existing databases of information, both public and private, and spotting suspicious patterns of activity.” “For example, the system might send an alert if someone tried to buy materials that could be used in bomb making, and booked a large truck and a hotel room near a government office.” Link.
John C. Lilly passed away on Sunday. He was well known for his pioneering work in neuroscience and interspecies communication.
According to the BBC, fusion power is “within reach.” Meanwhile, on a smaller scale but still very exciting, is the development of tiny batteries that convert body heat into electrical power. (Links via Slashdot).
A Brief History of Artificial Intelligence
In 1941 an invention that would one day revolutionize virtually every aspect of society made its debut: electronic computers were unveiled in both the United States and Germany. They were large, bulky units that required gargantuan air-conditioned rooms, and they were a programmer's nightmare, requiring thousands of wires to be configured separately just to run a single program.
Eight years later, in 1949, the stored-program computer was developed, making the task of programming simpler. Advances in computer theory launched the field of computer science, and soon thereafter artificial intelligence. The invention of this electronic means of processing data created a medium that made man-made intelligence a possibility. And while the new technology made it possible, the link between human intelligence and machine intelligence was not fully observed until the 1950s. One of the first Americans to make that observation was Norbert Wiener, whose principles of feedback theory were influential in the development of early AI.

In 1955 the Logic Theorist, considered by many to be the first functional AI program, was developed by Newell and Simon. The Logic Theorist attempted to solve problems according to a tree model, selecting the branch most likely to lead to a correct answer. It was a stepping stone in the development of the AI field. A year later John McCarthy, who has come to be regarded as the father of AI, organized a gathering at Dartmouth College in New Hampshire which became known as the Dartmouth Conference. From that point on the field of study became known as artificial intelligence. And while the conference itself was not an overall success, it brought the founders of AI together and laid the foundations of future AI research.

AI began to pick up momentum in the years following. While the field remained loosely defined, ideas were re-examined and built upon at AI research centers at Carnegie Mellon and MIT. New challenges were found and studied, including research on systems that could solve problems efficiently by limiting their search, in the manner of the Logic Theorist, and on systems that could learn by themselves. In 1957 the General Problem Solver (GPS) was first tested. The program was developed by Newell and Simon, who had earlier success with the Logic Theorist. As an extension of Wiener's feedback principle, GPS was capable of solving common-sense problems to a far greater extent than its predecessors. A year later John McCarthy announced his new creation to the world: the LISP language (short for LISt Processing). It was adopted as the language of choice among most AI developers and remains in use to this day. MIT received a $2.2 million grant from the US Department of Defense's Advanced Research Projects Agency (ARPA) to fund experiments involving AI. The grant was made to ensure that the US stayed ahead of the Soviet Union in technological advancement, and it served to increase the pace of AI development by drawing computer scientists from around the world.

SHRDLU was written by Terry Winograd at the MIT Artificial Intelligence Laboratory in 1968-1970. It carried on a simple dialog with a user, via a teletype, about a small world of objects (the BLOCKS world) shown on an early display screen. Winograd's dissertation, issued as MIT AI Technical Report 235 (Feb. 1971) under the title Procedures as a Representation for Data in a Computer Program for Understanding Natural Language, describes SHRDLU in greater detail. Other programs developed in this period include STUDENT, an algebra solver, and SIR, which understood simple English sentences. These programs helped refine language comprehension and logic in AI programs. The development of the expert system, which predicts the probability of a solution under set conditions, aided the advancement of AI research.
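As a concrete illustration, here is a minimal forward-chaining sketch of the rule-matching loop at the heart of such expert systems. The facts and rules are invented for illustration; real systems of the era were far larger and often weighted their conclusions:

```python
# Toy forward-chaining inference: fire any rule whose conditions are
# all known facts, add its conclusion, and repeat until nothing new.

facts = {"fever", "cough"}  # what we already know (invented example)

# Each rule: (set of required facts, fact to conclude).
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "see_doctor"),
]

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # {'cough', 'fever', 'flu_suspected'}
```

The second rule never fires because one of its conditions is missing, which is the whole trick: conclusions only follow when every set condition is met.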
During the 1970s new methods for testing AI programs were used, notably Minsky's frame theory. David Marr proposed new theories about machine vision, and the PROLOG language was developed during this time.

As the 1980s arrived, AI was moving at an even faster pace and making its way into the corporate sector. Since IBM had contracted a research team in the years following the release of GPS, it was only logical that a continued expansion into the corporate world would eventually happen. In 1986 US sales of AI-related hardware and software reached $425 million. Companies like Digital Equipment Corporation were using XCON, an expert system designed to configure the large VAX computer systems. DuPont, General Motors, and Boeing relied heavily on expert systems. Teknowledge and Intellicorp formed, helping fill the demand by specializing in software that aided the production of expert systems. It was in the years following this boom that computers first began to seep into private use, outside laboratory settings; the personal computer made its debut in this period. Fuzzy logic, pioneered in the US, had the unique ability to support decisions under uncertain conditions. New technology developed in Japan during this period also aided AI research, and neural networks were being considered as a possible means of achieving artificial intelligence.

The military put AI-based hardware through vigorous testing during the Gulf War. AI-based technology was used in missile systems, heads-up displays and various other systems. AI also began to make the transition into the home during this period, with applications such as voice and character recognition made available to the public. Artificial intelligence has affected our lives, and will continue to do so.

Do Intelligent Machines Dream of Global Conquest?
AI's impact has been beneficial in the past, but can we be so sure it will remain positive for us in the future, as AI becomes more sophisticated?
Recently Stephen Hawking, the renowned physicist, warned that if humans hope to compete with the rising tide of artificial intelligence, they will have to improve themselves through genetic engineering. This seems amusing at first, but several people agree with Hawking's observations.
Intelligent machines could replace the need for menial labor on our part while massively increasing production. They could present us with all manner of intellectual problems, artistic pursuits and new spiritual debates. This all seems well and good, of course, and many would welcome such an advancement.
However, the danger Hawking alludes to is that these intelligent machines could run amok, enslaving or attempting to replace humanity.
A Brief History of Genetic Engineering
It was in the Neolithic age that people began to save the seeds of the best specimens for the next planting, to domesticate and breed animals, and to use bacteria in the fermentation of food and beverages. The Neolithic age, in many respects, is the beginning of genetic engineering as we know it.
In 1866 a Czech monk studies peas through several generations and makes his postulations on the inheritance of biological characteristics in the species. His name is Gregor Mendel, and while his ideas are revolutionary, they are not widely appreciated until some four decades after their publication. In 1903 the American biologist Walter Sutton proposes that genes are located on chromosomes, which have been identified under the microscope.
Eight years later the Danish biologist Wilhelm Johannsen devises the term “gene” and distinguishes genotypes (genetic composition) from phenotypes (observable characteristics, open to influence from the environment). Biologist Charles B. Davenport, head of the US Eugenics Record Office in New York, publishes a book advising eugenic practices, based on claimed evidence that undesirable characteristics such as “pauperism” and “shiftlessness” are inherited traits. The eugenics movement becomes popular in the US and Northern Europe over the next three decades, until Nazism dawns and the effects of a fully functional eugenics program are seen for the first time.
In 1922 the American geneticist Thomas H. Morgan and his colleagues devise a technique to map genes and prepare a gene map of the fruit fly's chromosomes. Twenty-two years later Oswald Avery and colleagues at the Rockefeller Institute demonstrate that genes are composed of deoxyribonucleic acid (DNA). Around the same time Erwin Schrödinger publishes the classic “What is Life?”, which ponders the complexities of biology and suggests that chemical reactions don't tell the entire story.
In 1953 Francis Crick and James Watson, working at the Cavendish Laboratory in Cambridge, describe the double-helix structure of DNA. In 1971 Stanley Cohen of Stanford University and Herbert Boyer of the University of California, San Francisco develop the initial techniques for recombinant-DNA technology. They publish their paper in 1973, and apply for a patent on the techniques a year later. Boyer goes on to become a co-founder of Genentech, Inc., which becomes the first firm to exploit rDNA technology, making recombinant insulin.
In 1980 the US Supreme Court rules that recombinant microorganisms can be patented in the ground-breaking Diamond v. Chakrabarty case, which involved a bacterium engineered to break down the components of oil. The microorganism is never used to clean up oil spills, over concerns about its uncontrolled release into the environment. In the same year the first Genentech public stock offering sets a Wall Street record.
A year later the first monoclonal antibody diagnostic kits are approved for sale in America. The first automatic gene synthesizer is also marketed. In 1982 the first rDNA animal vaccine is approved for use in Europe, while the first rDNA pharmaceutical product, insulin, is approved for use in the United States. This same year the first successful cross-species transfer of a gene occurs, when a human growth gene is inserted into a lab mouse, and the first transgenic plant is grown.
In 1985 we see the first environmental release of genetically engineered microorganisms in the United States, despite controversy and heated debate over the issue. The so-called ice-minus bacterium is intended to protect crops from frost. In the same year the US declares that genetically engineered plants may be patented.
Transgenic pigs are produced in 1986 by inserting human growth hormone genes into pig embryos. The US Department of Agriculture experiment in Beltsville, Md., produces deformed and arthritic pigs. Two die before maturity and a third is never able to stand up.
In 1988 the first genetically engineered organism is approved for sale in Australia. Oncomouse, a mouse that was engineered to develop breast cancer by scientists at Harvard University with funding from DuPont, obtains a U.S. patent but is never patented in Europe. Many other types of transgenic mice are soon created. The Human Genome Project begins later in the year, whilst a German court stops the Hoechst pharmaceutical company from producing genetically engineered insulin after public protest over the issue.
In the 1990’s it is Kary Mullis’s invention of PCR and the development of automated sequencers that greatly enhance genetic research, becoming the warp drive for the age of molecular biology. Bioinformatics, proteomics and attempts at developing a mathematics (and computers capable) of determining protein folding will forever revolutionize the discovery of drugs and the development of novel proteins. New techniques like real-time PCR and microarrays can speak volumes about the level of genetic expression within a cell. Massive computers are being used to predict correlations between genotype and phenotype and the interaction between genes and environment.
These recent developments in molecular genetics can, if used properly, usher in a new age of evolution: one aided by genotyping and by understanding which phenotypes those genotypes correspond to.
The Protest Against Genetic Modification
The argument against what could easily have been deemed “mad science” just decades ago is that genetically modified foods are unsafe for consumption, as we do not yet know the long-term effects they will have on us or our ecosystem. From transgenic crops to animals, a growing opposition has demanded protections for citizens who have no desire to consume these unnatural products. The term “biospiracy” has been conjured up to brand conspiracies involving genetic engineering.
Eight multinationals under heavy scrutiny by protesters are Dow, Du Pont, Monsanto, Imperial Chemical Industries, Novartis, Rhône-Poulenc, Bayer and Hoechst. The claim is that these companies are funding genetic experiments aimed at engineering food seeds that would allow crops growing on farmland to accept higher doses of herbicides without dying. The fear is that this practice will load the soil and our bodies with toxic chemicals, all for the profit of megacorporations.
And since this article is going to explain how robots will take over the world if we don’t genetically enhance ourselves, it is most appropriate that I end this portion of the debate here and go off on a rant about the dangers of NOT using genetic modification technologies.
Hoo-Mun Versus Mechanoid
We’ve seen films such as The Terminator portray a future in which intelligent machines have humans on the run. Some fear that this fantastic-seeming concept could eventually become a reality.
Computers have, on average, been doubling in performance every 18 months. Our intellect has thus far been unable to keep up with such a staggering rate of development, and as such there is a possibility that computers could develop an intelligence that would prove dangerous to human civilization.
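To put that doubling rate in perspective, here is a quick back-of-the-envelope sketch; the time spans are arbitrary, picked only to show how fast the compounding runs away:

```python
# Performance growth under a doubling every 18 months (1.5 years):
# after t years the multiplier is 2 ** (t / 1.5).

for years in (3, 9, 15):
    factor = 2 ** (years / 1.5)
    print(f"{years} years -> ~{factor:,.0f}x the performance")

# 3 years  -> ~4x
# 9 years  -> ~64x
# 15 years -> ~1,024x
```

Fifteen years of that compounding is a thousandfold gain, which is the scale the worry is about.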
The protests against the genetic modification revolution now under way slow the progress of this research, sometimes grinding experiments to a halt. Whether over spirituality, safety or questions of ethics, these protests are managing to stall and delay the development of practical and safe means by which we can advance our own minds and bodies to cope with new environments and new threats to our safety.
Inorganic technology, on the other hand, is embraced with very little question. From cell phones to personal computers, we see these technologies proliferating at an extraordinary rate. The creation of the Internet has allowed this technology to flourish even more, while also letting protesters link up and co-ordinate their efforts to stop genetic engineering from moving forward at the same pace as other technologies.
Some might claim that the machines have a hidden agenda, that there already is an intelligent machine out there, directing traffic, infinitely patient and connected to the world. One might allege that these protesters are merely the pawns of a conspiracy which they themselves do not fully understand, a conspiracy by machines, for machines… against humanity.
Then again, that’s just whacko.
However, if there’s even the remotest possibility, you can bet…
Rivalino will be in there.
The first complete operation carried out by robots controlled by surgeons on the opposite side of the Atlantic has been successful. Surgeons based in New York used the robots to remove the gall bladder of a woman 7000 kilometres away in Strasbourg, France. (Link).
The Global Consciousness Project has noted that data from fundamentally random devices deviates from pure randomness during major world events. Apparently, several hours before the plane crashes on Sept. 11, striking anomalies in the data occurred. Coincidence? Or evidence of something strange? Link (via Memepool).
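For the curious, here is a minimal sketch of the kind of deviation statistic such a project might compute. The 200-bits-per-trial setup mirrors the GCP's published design, but the trial values below are invented, and this is not the project's actual code:

```python
import math

# Each trial sums 200 random bits, so under the null hypothesis it has
# mean 100 and variance 50. We track a cumulative sum of (z^2 - 1),
# whose expected increment is zero if the devices are truly random;
# a persistent drift away from zero is what gets flagged as anomalous.

trials = [104, 97, 110, 97, 97, 108, 101, 115]  # invented sample data

cumulative = 0.0
for x in trials:
    z = (x - 100) / math.sqrt(50)
    cumulative += z * z - 1
print(f"cumulative deviation: {cumulative:.2f}")
```

Whether such drifts mean anything is, of course, exactly what the skeptics and the project disagree about.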
Abuddhas Memes is an excellent blog dealing with consciousness, entheogens, memetics, science, philosophy and “big brother.” It has featured such excellent links as How to Live in a Simulation, The LIDA Machine (a Russian device that uses mind-altering radio waves to paralyze people), and The Orion’s Arm World Building Group.