Tag: MIT

“Artificial Leaf” Converts Sunlight into Energy


Researchers at MIT have created an “artificial leaf” out of “earth-abundant, inexpensive materials — mostly silicon, cobalt and nickel.” There’s not yet a way to collect and store this energy, but it’s a step:

Researchers led by MIT professor Daniel Nocera have produced something they’re calling an “artificial leaf”: Like living leaves, the device can turn the energy of sunlight directly into a chemical fuel that can be stored and used later as an energy source.

The artificial leaf — a silicon solar cell with different catalytic materials bonded onto its two sides — needs no external wires or control circuits to operate. Simply placed in a container of water and exposed to sunlight, it quickly begins to generate streams of bubbles: oxygen bubbles from one side and hydrogen bubbles from the other. If placed in a container that has a barrier to separate the two sides, the two streams of bubbles can be collected and stored, and used later to deliver power: for example, by feeding them into a fuel cell that combines them once again into water while delivering an electric current.

MIT News: ‘Artificial leaf’ makes fuel from sunlight

A factory of one’s own

According to MIT’s Neil Gershenfeld, the digital revolution is over, and the good guys won. The next big change will be about manufacturing. Anyone with a PC will be able to build anything just by hitting ‘print.’

(Fortune Magazine) — Imagine a machine with the ability to manufacture anything. Now imagine that machine in your living room. What would you build first? Would you start a business? Would you ever buy anything retail again? According to MIT physicist Neil Gershenfeld, it’s not too early to think about these questions, because that machine, which he calls a personal fabricator, is not so far off – or so far-fetched – as you might think.

Gershenfeld is director of MIT’s Center for Bits and Atoms (CBA), an interdisciplinary outfit studying the intersection between information theory and industrial design. He also teaches a course called How to Make (Almost) Anything.

Five years ago the National Science Foundation awarded the CBA $14 million to build a manufacturing lab full of futuristic hardware. That includes a nanobeam writer that can etch microscopic patterns on metal, and a supersonic waterjet cutter that generates 60,000 pounds of water pressure, enough to shear through almost any material. The CBA factory can churn out anything, from the tiniest semiconductor to an entire building.

continue reading via money.cnn.com

Grow Your Own House


Those familiar with Paul Laffoley will be excited to see this project:

The Fab Tree Hab — a home literally made from trees, using an ancient technique called pleaching (the art of weaving (and sometimes grafting) trees together to form structures) — was one of the design entries for the Index: awards, emerging from the genius of a crew including MIT architect Mitchell Joachim and our friend, Javier Arbona of Archinect. The project description emphasized consideration of whole systems (and ecosystems) in creating a truly sustainable built environment, rather than a piecemeal approach that could yield uncertain long-term outcomes.

World Changing: Grow Your Own Treehouse and other thoughts on Ecological Architecture

See also: Influences on Archinode’s Fab Tree Hab

MIT Lists History’s Top 10 Technological Failures

MIT’s Tech Review picked its top 10 worst technological disasters:

Many of the factors that make them go spectacularly wrong are surprisingly consistent: impatient clients who won’t hear “no”; shady or lazy designers who cut corners; excess confidence in glamorous new technologies; and, of course, good old-fashioned hubris.

MIT Technology Review: 10 Technology Disasters

(via Plastic)

Encryption

As government surveillance increases, many people are turning to encryption to protect their privacy. After the 9/11 attacks, many governments, including those of the United States, Canada and the United Kingdom, expanded their surveillance powers. Snoopers, however, cannot make sense of properly encrypted communications.

Encryption codes a message so that it cannot be understood by anyone other than the intended recipient. This can be done by talking in code over the telephone or by mathematically encrypting data over the Internet. Strong encryption usually refers to virtually unbreakable military-strength data encryption. It is used outside of the military primarily for private messaging, securing purchases online, online identity verification, and transmitting sensitive doctor-patient information.
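
To make the idea concrete, here is a minimal sketch of symmetric encryption in Python using the third-party cryptography library (the library choice is mine for illustration; the article names no specific tools): a single shared key both locks and unlocks the message.

```python
# Symmetric encryption sketch: one shared secret key both encrypts and decrypts.
# Uses the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # the shared secret; keep it off the wire
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"Meet at the usual place at noon.")
print(ciphertext)                    # unreadable to anyone without the key

plaintext = cipher.decrypt(ciphertext)
print(plaintext.decode())            # "Meet at the usual place at noon."
```

Anyone who intercepts the ciphertext but not the key sees only scrambled bytes, which is the whole point.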

PGP (Pretty Good Privacy) is the standard for Internet encryption. PGP works by creating both a public key and a private key. The public key is available to anyone, while the private key is kept a secret. The public key is used to encrypt a message and the private key is then used to decode it. PGP’s security comes from the difficulty in factoring very large numbers. Until a more efficient way to factor numbers is found, cracking a PGP encrypted message is virtually impossible. It is frequently pointed out that ‘pretty good’ is an understatement about the privacy offered by PGP. The only way an outside party could decrypt a message would be to somehow acquire the private key from the user or try every possible key (which would take about 100 million years with modern technology according to MIT mathematician Roger Schroeppel). For more information on PGP security read the PGP Attacks FAQ.
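
PGP itself layers several algorithms together, but the public/private split it relies on can be sketched with plain RSA. The snippet below, again using Python's cryptography library as an assumed stand-in rather than PGP's actual implementation, shows the essential asymmetry: anyone can encrypt with the published key, but only the holder of the private key can decrypt.

```python
# Public-key sketch: encrypt with the public key, decrypt with the private key.
# This is bare RSA-OAEP, not PGP itself, but the key split works the same way.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()    # safe to publish to anyone

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = public_key.encrypt(b"for your eyes only", oaep)
plaintext = private_key.decrypt(ciphertext, oaep)
assert plaintext == b"for your eyes only"
```

Recovering the private key from the public key amounts to factoring a very large number, which is exactly the hard problem the paragraph above describes.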

New Legislative Powers

In the United Kingdom the Regulation of Investigatory Powers Act (RIP) of 2000 makes it a crime to withhold encryption keys from the government, punishable by up to seven years in jail. The United States has a history of trying to limit civilian use of military-strength encryption: during the Clinton administration, legislation was proposed to require that government back doors be built into encryption software. These proposals failed due to commercial opposition and protests that encryption bans simply would not work. Public outrage over post-9/11 legislation, ostensibly for “homeland defense”, has created greater awareness of encryption techniques. Government and law enforcement agencies, consequently, have a renewed interest in limiting the general public’s access to encryption.

Encryption’s opponents contend that sacrificing some privacy is necessary to ensure national security. “[Encryption makers] have as much at risk as we have at risk as a nation, and they should understand that as a matter of citizenship, they have an obligation [to provide the government back door access to encryption products],” Sen. Judd Gregg (R-New Hampshire) said in a floor speech after the 9/11 attacks. Gregg pursued legislation that would require government backdoors to be built into all encryption software but, according to Wired News, suddenly changed his mind.

The Clipper Chip

Strong encryption’s security is compromised by backdoor systems like the one proposed during the mid-1990s. That system, known as the Clipper Chip, would have transmitted keys to law enforcement agencies so that they could unlock encrypted messages. Unfortunately, when the government’s copy of a key is transmitted to “key banks” it risks being intercepted, and the key banks themselves could become targets of terrorist hackers. See the Clipper section of the RSA Cryptography FAQ for more information. The material that terrorists could intercept through government backdoors includes credit card numbers that could be used to fund terrorist acts and personal information that could be used for identity theft. “Having a good, strong crypto infrastructure in our country is part of what we need to combat terrorism,” PGP creator Philip Zimmermann told the Reuters news agency.

In addition to the security issues presented by government backdoors is the question as to whether backdoors would do any good for law enforcement agencies. “. . . It [a law banning strong crypto] doesn’t prevent terrorists from getting their crypto from somewhere else,” James Lewis (director for the Technology and Public Policy Program at the Center for Strategic and International Studies, Washington DC) pointed out in a Zdnet News interview.

DoJ v. Zimmermann and PGP

The controversy began in 1991 when Philip Zimmermann created PGP. The software was capable of encrypting files and e-mails through the use of state-of-the-art patented encryption algorithms. Zimmermann’s friend Kelly Goen distributed the software by uploading it from his laptop to various Internet newsgroups and dial-up bulletin board systems from pay phones with an acoustic coupler. Steve Levy’s book Crypto (New York: Penguin Putnam, 2001) reveals that Goen was very caught up in the drama of distributing the software. Levy quotes computer activist Jim Warren saying Goen “. . . wanted to get as many copies scattered as widely as possible around the nation before the government could get an injunction to stop him.”

Even though Goen was careful to upload the software only to US-based systems, Zimmermann spent the next five years involved in a legal battle with the US Department of Justice for violating export regulations on encryption software. In spite of this (or because of it) PGP became the standard for encrypting electronic data. In 1996 the Justice Department dropped the case, and PGP was sold to Network Associates, which is now trying to sell the rights to another company.

PGP is available for all major operating systems and is easy to use. It has also spawned a non-patented clone called GPG (GNU Privacy Guard). Zimmermann now works for HushMail, a free Web-based e-mail service with built-in PGP encryption.

Encryption: A Guide to Possibilities

If backdoors in software or RIP-esque key-on-demand laws become an international standard, there are ways to get around them. One-time pads and deniable encryption such as steganography would still be able to ensure privacy.

Rubberhose: Rubberhose is a UNIX-clone software package from the United Kingdom. Rubberhose allows users to hide data on their hard drives. According to the Rubberhose site: “If someone grabs your Rubberhose-encrypted hard drive, he or she will know there is encrypted material on it, but not how much — thus allowing you to hide the existence of some of your data.” This is advantageous in the RIP-model. If a corrupt government seizes a hard drive, it would be possible for the user to only give away the keys to certain non-offensive data (such as a file named “Mom’s Secret Cookie Recipe”). Of course, this would be of little use in the backdoor model because use of encryption without backdoors would be illegal.

Steganography: Steganography is the practice of secretly embedding data into other data so that it doesn’t appear that communication has occurred. This could be done non-technically, for example, by using code words in the classified ads section of a newspaper. Software such as OutGuess hides messages in seemingly random portions of other files such as images or sounds. According to the OutGuess site: “OutGuess preserves statistics based on frequency counts. As a result, no known statistical test is able to detect the presence of steganographic content.” The drawback is that the recipient must have a key to unlock the hidden information, and that key must somehow be transmitted. One of the major advantages is that a message can be posted in public if the recipient knows what to look for, thus making it difficult for others to detect that communication has even occurred. Your recipient could agree, for example, to communicate through popular files on the Gnutella network. Imagine a group of hackers communicating through Britney Spears publicity photos.
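
As a rough illustration of the principle (not of how OutGuess actually works, which embeds data in the redundant bits of JPEG and PNM files while preserving frequency statistics), the sketch below hides a short message in the least significant bits of an ordinary image. Python and the Pillow library are my own choices for this example.

```python
# Toy steganography sketch: hide a message in the least significant bit of
# each pixel channel. Illustration only -- trivially detectable by real analysis.
from PIL import Image

def hide(image_path, message, out_path):
    img = Image.open(image_path).convert("RGB")
    bits = "".join(f"{byte:08b}" for byte in (message + "\0").encode())
    flat = [channel for pixel in img.getdata() for channel in pixel]
    assert len(bits) <= len(flat), "message too long for this image"
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & ~1) | int(bit)          # overwrite the lowest bit
    pixels = [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]
    stego = Image.new("RGB", img.size)
    stego.putdata(pixels)
    stego.save(out_path, "PNG")                      # lossless, so the bits survive

def reveal(image_path):
    flat = [c for p in Image.open(image_path).convert("RGB").getdata() for c in p]
    chars = []
    for i in range(0, len(flat), 8):
        byte = int("".join(str(c & 1) for c in flat[i:i + 8]), 2)
        if byte == 0:                                # null terminator ends the message
            break
        chars.append(chr(byte))
    return "".join(chars)
```

A lossless format such as PNG is required; re-saving the carrier as JPEG would recompress the pixels and destroy the hidden bits.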

One-time Pads: One-time pads are a form of unbreakable encryption based on random numbers. In a plain-text message, a different random number represents each character each time it is used. Only someone with the key can decipher the message, because all possible values for the random numbers are equally likely. The only way to break this code would be to acquire a copy of the key. The problem is that the two parties communicating through this method must have a secure way to transmit keys. The other problem is that the key must be at least as long as the message itself. The advantage of this method is that it does not require a computer, only a way to generate random numbers.
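
A minimal digital rendering of the idea, assuming the pad is generated and exchanged securely, might look like the following Python sketch: each byte of the message is XORed with a fresh random key byte, and the same operation with the same key recovers the plaintext.

```python
# One-time pad sketch: XOR every message byte with a random key byte of the
# same length. If the key is truly random, as long as the message, kept secret
# and never reused, the result is information-theoretically unbreakable.
import secrets

def otp_encrypt(message):
    key = secrets.token_bytes(len(message))            # one key byte per message byte
    ciphertext = bytes(m ^ k for m, k in zip(message, key))
    return key, ciphertext

def otp_decrypt(key, ciphertext):
    return bytes(c ^ k for c, k in zip(ciphertext, key))

key, ct = otp_encrypt(b"banana pudding recipe")
assert otp_decrypt(key, ct) == b"banana pudding recipe"
```

Reusing the pad even once destroys the guarantee, which is exactly the key-distribution burden described above.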

Whether it’s an embarrassing note about your sex life or your secret recipe for banana pudding, everyone has something they would rather other people not see. The recent increases in government-permitted surveillance make encryption useful to everyone, not just paranoid nuts.

More:

PGP International The home of Pretty Good Privacy, the de-facto standard for Internet-enabled digital encryption. Features news, manuals and downloads.

Electronic Frontier Foundation “The Electronic Frontier Foundation (EFF) was created to defend our rights to think, speak, and share our ideas, thoughts, and needs using new technologies, such as the Internet and the World Wide Web. EFF is the first to identify threats to our basic rights online and to advocate on behalf of free expression in the digital age.”

Philip Zimmermann Philip Zimmermann created PGP. This site includes his PGP writings, Senate testimony, news, consultancy services and an extensive links collection.

RSA Cryptography FAQ RSA Laboratories have created an extensive FAQ on cryptography’s history, the major cryptosystems, techniques and applications, and real-world cases. Highly recommended.

One-time Pad FAQ A quick guide to one-time pads, explaining how this cryptosystem works, distribution methods and sources of randomness.

GnuPG An open source encryption standard. The site includes an extensive FAQ, the GNU Privacy Handbook and more. “GnuPG stands for GNU Privacy Guard and is GNU’s tool for secure communication and data storage. It can be used to encrypt data and to create digital signatures. It includes an advanced key management facility and is compliant with the proposed OpenPGP Internet standard as described in RFC 2440.”

HushMail Free encrypted Web-based e-mail. “HushMail eliminates the risk of leaving unencrypted files on Web servers. HushMail messages, and their attachments, are encrypted using OpenPGP standard algorithms.”

Freenet Project Freenet is a peer-to-peer (P2P) publishing network that enables you to publish encrypted documents. Ian Clarke’s system has been used by grassroots political groups and individuals to publish controversial information.

Rubberhose “Rubberhose transparently and deniably encrypts disk data, minimising the effectiveness of warrants, coersive interrogations and other compulsive mechanims, such as U.K RIP legislation. Rubberhose differs from conventional disk encryption systems in that it has an advanced modular architecture, self-test suite, is more secure, portable, utilises information hiding (steganography/deniable cryptography), works with any file system and has source freely available.” [Update: Interesting historical sidenote, this now discontinued project was created by Julian Assange, see also: Wikipedia entry for Rubberhose]

OutGuess “OutGuess is a universal steganographic tool that allows the insertion of hidden information into the redundant bits of data sources. The nature of the data source is irrelevant to the core of OutGuess. The program relies on data specific handlers that will extract redundant bits and write them back after modification. In this version the PNM and JPEG image formats are supported.”

(This article originally appeared at http://www.disinfo.com/archive/pages/dossier/id2007/pg1/ January 31, 2002)

Rivalino Is in Here: Robotic Revolt and the Future Enslavement of Humanity

Some might claim that the machines have a hidden agenda, that there already is an intelligent machine out there, directing traffic, infinitely patient and connected to the world. One might allege that these protesters are merely the pawns of a conspiracy which they themselves do not fully understand, a conspiracy by machines, for machines… against humanity.

A Brief History of Artificial Intelligence

In 1941 a new invention that would one day revolutionize virtually every aspect of society was developed: electronic computers were unveiled in both the United States and Germany. They were large, bulky units that required gargantuan air-conditioned rooms, and they were a programmer’s nightmare, requiring the separate configuration of thousands of wires to get a program to run.

Eight years later, in 1949, the stored-program computer was developed, making the task of programming simpler. Advances in computer theory gave rise to the field of computer science and, soon thereafter, artificial intelligence. The invention of this electronic means of processing data created a medium that made man-made intelligence a possibility. And while the new technology made it possible, the link between human intelligence and machine intelligence was not fully observed until the 1950s. One of the first Americans to draw on the principles of feedback theory was Norbert Wiener, whose work was influential in the development of early artificial intelligence.

In 1955 the Logic Theorist was developed by Newell and Simon, considered by many to be the first functional AI program. The Logic Theorist attempted to solve problems according to a tree model, selecting the branch most likely to lead to a correct answer. It was a stepping stone in the development of the field. A year later John McCarthy, who has come to be regarded as the father of AI, organized a gathering at Dartmouth College in New Hampshire which became known as the Dartmouth Conference. From that point on the field of study became known as artificial intelligence. And while the conference itself was not an overall success, it did bring the founders of AI together and laid the foundations of future AI research.

AI began to pick up momentum in the years that followed. While the field remained loosely defined, ideas were re-examined and built upon at AI research centers at Carnegie Mellon and MIT. New challenges were found and studied, including research on systems that could problem-solve efficiently by limiting their search, much like the Logic Theorist, and on systems that could learn by themselves. In 1957 the General Problem Solver (GPS) was first tested. The program was developed by Newell and Simon, who had earlier success with the Logic Theorist. As an extension of Wiener’s feedback principle, the GPS was capable of solving common-sense problems to a far greater extent than its predecessors. A year later John McCarthy announced his new creation to the world: the LISP language (short for LISt Processing). It was adopted as the language of choice among most AI developers and remains in use to this day.

MIT received a 2.2 million dollar grant from the US Department of Defense’s Advanced Research Projects Agency (ARPA) to fund experiments involving AI. The grant was made to ensure that the US could stay ahead of the Soviet Union in technological advancement, and it served to increase the pace of AI development by drawing computer scientists from around the world. SHRDLU was written by Terry Winograd at the MIT Artificial Intelligence Laboratory in 1968-1970. It carried on a simple dialog with a user, via a teletype, about a small world of objects (the BLOCKS world) shown on an early display screen. Winograd’s dissertation, issued as MIT AI Technical Report 235 in February 1971 under the title Procedures as a Representation for Data in the Computer Program for Understanding Natural Language, describes SHRDLU in greater detail. Other programs developed in this period include STUDENT, an algebra solver, and SIR, which understood simple English sentences. These programs helped refine language comprehension and logic in AI programs. The development of the expert system, which predicts the probability of a solution under set conditions, aided the advancement of AI research.
During the 1970s new methods for testing AI programs were adopted, notably Minsky’s frame theory. David Marr proposed new theories about machine vision, and the Prolog language was developed during this time.

As the 1980s arrived, AI was moving at an even faster pace and making its way into the corporate sector. Since IBM had contracted a research team in the years following the release of GPS, it was only logical that a continued expansion into the corporate world would eventually happen. In 1986 US sales of AI-related hardware and software reached $425 million. Companies such as Digital Equipment Corporation were using XCON, an expert system designed to configure the large VAX computer systems. DuPont, General Motors, and Boeing used expert systems heavily. Teknowledge and Intellicorp formed to help fill the demand, specializing in software that aided the production of expert systems. It was in the years following this boom that computers first began to seep into private use outside laboratory settings; the personal computer made its debut in this period.

Fuzzy logic, pioneered in the US, offered the ability to make decisions under uncertain conditions, and new technology developed in Japan during this period aided AI research. Neural networks were being considered as a possible means of achieving artificial intelligence. The military put AI-based hardware to vigorous testing during the war with Iraq, using it in missile systems, heads-up displays and various other technologies. AI also began to make the transition into the home during this period, and applications such as voice and character recognition were made available to the public. Artificial intelligence has affected our lives and will continue to do so.

Do Intelligent Machines Dream of Global Conquest?

While beneficial in the past, can we be so sure that this impact will remain positive for us in the future, as AI becomes more sophisticated?

Recently Stephen Hawking, the renowned physicist, warned that if humans hope to compete with the rising tide of artificial intelligence they will have to improve themselves through genetic engineering. This seems amusing at first, but there are several who agree with Hawking’s observations.

Intelligent machines could replace the need for menial labor on our parts while massively increasing production. They could overwhelm us with all forms of intellectual problems, artistic pursuits and new spiritual debate. This seems well and good, of course. There are many who would welcome such an advancement in that scenario.

However, the danger alluded to by Hawking is that these intelligent machines could run amok, enslaving or attempting to replace humanity.

A Brief History of Genetic Engineering

It was in the Neolithic age that people began saving the seeds of the best specimens for the next planting, domesticating and breeding animals, and using bacteria in the fermentation of food and beverages. The Neolithic age, in many respects, is the beginning of genetic engineering as we know it.

In 1866 a Czech monk studies peas through several generations and makes his postulations on the inheritance of biological characteristics in the species. His name is Gregor Mendel, and while his ideas are revolutionary, they are not widely appreciated until some four decades after their publication. It is in 1903 that the American biologist Walter Sutton proposes that genes are located on chromosomes, which have been identified through a microscope.

Eight years later the Danish biologist Wilhelm Johannsen devises the term “gene” and distinguishes genotypes (genetic composition) from phenotypes (observable characteristics, open to influence from the environment). Biologist Charles B. Davenport, head of the US Eugenics Record Office in NY, publishes a book advising eugenic practices, based on evidence that undesirable characteristics such as “pauperism” and “shiftlessness” are inherited traits. The eugenics movement becomes popular in the US and Northern Europe over the next three decades, until Nazism dawns and the effects of a fully functional eugenics program are seen for the first time.

In 1922 the American geneticist Thomas H. Morgan and his colleagues devise a technique to map genes and prepare a gene map of the fruit fly chromosomes. Twenty-two years later Oswald Avery and colleagues at the Rockefeller Institute demonstrate that genes are composed of deoxyribonucleic acid (DNA). Around the same time Erwin Schrödinger publishes the classic “What is Life?”, which ponders the complexities of biology and suggests that chemical reactions don’t tell the entire story.

In 1953 Francis Crick and James Watson, working at the Molecular Biology Laboratory at Cambridge, explain the double-helix structure of DNA. In 1971 Stanley Cohen of Stanford University and Herbert Boyer of the University of California in San Francisco develop the initial techniques for recombinant-DNA technologies. They publish the paper in 1973, and apply for a patent on the technologies a year later. Boyer goes on to become a co-founder of Genentech, Inc., which becomes the first firm to exploit rDNA technologies by making recombinant insulin.

In 1980 the US Supreme Court rules that recombinant microorganisms can be patented in the ground-breaking Diamond v. Chakrabarty case, which involved a bacterium engineered to break down the components of oil. The microorganism is never used to clean up oil spills because of concern over its uncontrolled release into the environment. In the same year the first Genentech public stock offering sets a Wall Street record.

A year later the first monoclonal antibody diagnostic kits are approved for sale in America, and the first automatic gene synthesizer is marketed. In 1982 the first rDNA animal vaccine is approved for use in Europe while the first rDNA pharmaceutical product, insulin, is approved for use in the United States. This same year the first successful cross-species transfer of a gene occurs when a human growth gene is inserted into a lab mouse, and the first transgenic plant is grown.

In 1985 we see the first environmental release of genetically engineered microorganisms in the United States, despite controversy and heated debate over the issue. The so-called ice-minus bacteria is intended to protect crops from frost. In the same year the US declares that genetically engineered plants may be patented.

Transgenic pigs are produced in 1986 by inserting human growth hormone genes into pig embryos. The US Department of Agriculture experiment in Beltsville, Md., produces deformed and arthritic pigs. Two die before maturity and a third is never able to stand up.

In 1988 the first genetically engineered organism is approved for sale in Australia. Oncomouse, a mouse that was engineered to develop breast cancer by scientists at Harvard University with funding from DuPont, obtains a U.S. patent but is never patented in Europe. Many other types of transgenic mice are soon created. The Human Genome Project begins later in the year, whilst a German court stops the Hoechst pharmaceutical company from producing genetically engineered insulin after public protest over the issue.

In the 1990s it is Kary Mullis’s discovery of PCR and the development of automated sequencers that greatly enhance genetics research, becoming the warp drive for the age of molecular biology. Bioinformatics, proteomics and the attempts at developing a mathematics (and computers capable) of determining protein folding will forever revolutionize the discovery of drugs and the development of novel proteins. New techniques like real-time PCR and microarrays speak volumes about the level of genetic expression within a cell. Massive computers are being used to predict correlations between genotype and phenotype and the interaction between genes and environment.

These recent developments in molecular genetics can, if used properly, usher in a new age of evolution: one aided by genotyping and an understanding of which phenotypes these genotypes correspond to.

The Protest Against Genetic Modification

The argument against what could easily have been deemed “mad science” just decades ago is that genetically modified foods are unsafe for consumption as we do not yet know the long-term effects they will have on us or our ecosystem. From transgenic crops to animals, a growing opposition force has demanded that there be protections for citizens who have no desire to consume these unnatural products. The term biospiracy has been conjured up to distinctly brand conspiracies involving genetic engineering.

Eight multinationals under heavy scrutiny by protesters are Dow, Du Pont, Monsanto, Imperial Chemical Industries, Novartis, Rhone Poulenc, Bayer and Hoechst. The claim is that these companies are funding genetic experiments aimed at engineering food seeds which would allow food supplies growing on farmland to accept higher doses of herbicides without dying. The fear is that this practice will load the soil and our bodies with toxic chemicals, all for the profit of megacorporations.

And since this article is going to explain how robots will take over the world if we don’t genetically enhance ourselves, it would be most appropriate that I end this portion of the debate and go off into a rant about the dangers of NOT using genetic modification technologies.

Hoo-Mun Versus Mechanoid

We’ve seen films such as The Terminator portray a future in which intelligent machines have humans on the run. Some fear that this fantastic-seeming concept could eventually become a reality.

Computers have, on average, been doubling their performance every 18 months. Our intellect has thus far been unable to keep up with such a staggering rate of development, and as such there is a possibility that the computers could develop an intelligence which would prove dangerous to our human civilization.

The protests against the genetic modification revolution now under way slow the progress of this research, sometimes grinding experiments to a halt. Whether motivated by spiritual beliefs, safety concerns or questions of ethics, these protests are managing to stall and delay the development of practical and safe means by which we can advance our own minds and bodies to cope with new environments and new threats to our safety.

Inorganic technology, on the other hand, is embraced with very little question. From cell phones to personal computers, we see these technologies proliferating at an extraordinary rate. The creation of the Internet has allowed them to flourish all the more, while also letting protesters link up and coordinate their efforts to stop genetic engineering from moving forward at the same pace as other technologies.

Some might claim that the machines have a hidden agenda, that there already is an intelligent machine out there, directing traffic, infinitely patient and connected to the world. One might allege that these protesters are merely the pawns of a conspiracy which they themselves do not fully understand, a conspiracy by machines, for machines… against humanity.

Then again, that’s just whacko.

However, if there’s even the remotest possibility, you can bet…

Rivalino will be in there.
