Tag Archives: computer science

How We Found the Missing Memristor

How We Found the Missing Memristor by R. Stanley Williams

For nearly 150 years, the known fundamental passive circuit elements were limited to the capacitor (discovered in 1745), the resistor (1827), and the inductor (1831). Then, in a brilliant but underappreciated 1971 paper, Leon Chua, a professor of electrical engineering at the University of California, Berkeley, predicted the existence of a fourth fundamental device, which he called a memristor. He proved that memristor behavior could not be duplicated by any circuit built using only the other three elements, which is why the memristor is truly fundamental.
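
A compact way to see where the memristor fits, following Chua's framing: each passive element is defined by a relation between two of the four basic circuit variables (voltage v, current i, charge q, and flux linkage φ), and the q–φ relation was the one with no known device:

\begin{aligned}
\text{resistor:}\;  & dv = R\,di \\
\text{capacitor:}\; & dq = C\,dv \\
\text{inductor:}\;  & d\varphi = L\,di \\
\text{memristor:}\; & d\varphi = M(q)\,dq
\end{aligned}

When M is constant this collapses to Ohm's law; it is the dependence of M on the charge that has flowed through the device that gives the memristor its memory.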

The memristor’s potential goes far beyond instant-on computers to embrace one of the grandest technology challenges: mimicking the functions of a brain. Within a decade, memristors could let us emulate, instead of merely simulate, networks of neurons and synapses. Many research groups have been working toward a brain in silico: IBM’s Blue Brain project, Howard Hughes Medical Institute’s Janelia Farm, and Harvard’s Center for Brain Science are just three. However, even a mouse brain simulation in real time involves solving an astronomical number of coupled partial differential equations. A digital computer capable of coping with this staggering workload would need to be the size of a small city, and powering it would require several dedicated nuclear power plants.

Related: Demystifying the Memristor – Understanding Computers and the Internet – 10 Science Facts You Should Know

The Chip That Designs Itself

The chip that designs itself by Clive Davidson, 1998

Adrian Thompson, who works at the university’s Centre for Computational Neuroscience and Robotics, came up with the idea of self-designing circuits while thinking about building neural network chips. A graduate in microelectronics, he joined the centre four years ago to pursue a PhD in neural networks and robotics.

To get the experiment started, he created an initial population of 50 random circuit designs coded as binary strings. The genetic algorithm, running on a standard PC, downloaded each design to the Field Programmable Gate Array (FPGA) and tested it with the two tones generated by the PC’s sound card. At first there was almost no evidence of any ability to discriminate between the two tones, so the genetic algorithm simply selected circuits which did not appear to behave entirely randomly. The fittest circuit in the first generation was one that output a steady five-volt signal no matter which tone it heard.

By generation 220 there was some sign of improvement. The fittest circuit could produce an output that mimicked the input – waveforms that corresponded to the 1 kHz or 10 kHz tones – but not a steady zero or five-volt output.

By generation 650, some evolved circuits gave a steady output to one tone but not the other. It took almost another 1,000 generations to find circuits that could give approximately the right output and another 1,000 to get accurate results. However, there were still some glitches in the results and it took until generation 4,100 for these to disappear. The genetic algorithm was allowed to run for a further 1,000 generations but there were no further changes.
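
The article does not include Thompson’s code, but the loop it describes is a standard genetic algorithm: evaluate a population, keep the fitter circuits, and breed mutated recombinations of them. Here is a minimal Python sketch of that kind of loop; the genome length, mutation rate, and the toy_fitness stand-in are placeholders rather than details from the experiment (the real fitness test configured the FPGA with each bitstring and scored its response to the two tones).

import random

GENOME_BITS = 1800        # placeholder genome length
POP_SIZE = 50             # matches the article's initial population
MUTATION_RATE = 0.002     # placeholder per-bit mutation probability

def toy_fitness(genome):
    # Stand-in for the real test, which downloaded the bitstring to the FPGA
    # and measured how well its output separated the 1 kHz and 10 kHz tones.
    return sum(genome)

def mutate(genome):
    return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]

def crossover(a, b):
    cut = random.randrange(len(a))
    return a[:cut] + b[cut:]

def evolve(fitness, generations):
    population = [[random.randint(0, 1) for _ in range(GENOME_BITS)]
                  for _ in range(POP_SIZE)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[:POP_SIZE // 2]      # keep the fitter half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = evolve(toy_fitness, generations=50)

With hardware in the loop instead of toy_fitness, each generation means reconfiguring the chip dozens of times, which is why the experiment took thousands of generations to converge.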

See Adrian Thompson’s home page (Department of Informatics, University of Sussex) for more on evolutionary electronics, such as Scrubbing away transients and jiggling around the permanent: Long survival of FPGA systems through evolutionary self-repair:

Mission operation is never interrupted. The repair circuitry is sufficiently small that a pair could mutually repair each other. A minimal evolutionary algorithm is used during permanent fault self-repair. Reliability analysis of the studied case shows the system has a 0.99 probability of surviving 17 times the mean time to local permanent fault arrival. Such a system would have a 0.99 probability of surviving 100 years with one fault every 6 years.

Very cool.

Related: Evolutionary Design – Invention Machine – Evo-Devo

How Large Quantities of Information Change Everything

Scale: How Large Quantities of Information Change Everything

There’s another important downside to scale. When we look at large quantities of information, what we’re really doing is searching for patterns. And being the kind of creatures that we are, and given the nature of the laws of probability, we are going to find patterns. Distinguishing between a real, legitimate pattern and something random that just happens to look like one can be somewhere between difficult and impossible. Using things like Bayesian methods to screen out the false positives can help, but scale means that scientists need to learn new methods – both new ways of doing things that they couldn’t do before, and new ways of recognizing when they’ve screwed up.
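
A minimal illustration of the false-pattern problem (not from the article): test enough purely random data sets for "significant" correlations and some will pass an uncorrected 5% threshold by chance alone. The variable counts and threshold below are arbitrary.

import random

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

random.seed(1)
N_TESTS, N_POINTS = 1000, 30   # 1000 unrelated variable pairs, 30 samples each
THRESHOLD = 0.36               # roughly the |r| a naive 5% test flags for 30 samples

hits = 0
for _ in range(N_TESTS):
    xs = [random.gauss(0, 1) for _ in range(N_POINTS)]
    ys = [random.gauss(0, 1) for _ in range(N_POINTS)]
    if abs(correlation(xs, ys)) > THRESHOLD:
        hits += 1

print(f"'Significant' correlations found in pure noise: {hits} of {N_TESTS}")
# A Bonferroni-style correction divides the 5% threshold by the number of
# tests, raising the required |r| so that almost none of these random pairs
# would survive it.

Expect on the order of 50 spurious "discoveries" out of 1000 tests of pure noise, which is the scale problem in miniature.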

That’s the nature of scale. Tasks that were once simple have become hard or even impossible, because they can’t be done at scale. Tasks that were once impossible have become easy because scale makes them possible. Scale changes everything.

I discussed related ideas on my Curious Cat Management Improvement blog recently: Does the Data Deluge Make the Scientific Method Obsolete?

Related: Seeing Patterns Where None Exists – Mistakes in Experimental Design and Interpretation – Optical Illusions and Other Tricks on the Brain – Data Based Decision Making at Google

von Neumann Architecture and Bottleneck

We each use computers a great deal (to write this blog and to read it, for example) but often have little understanding of how a computer actually works. This post gives some details on the inner workings of your computer.
What Your Computer Does While You Wait

People refer to the bottleneck between CPU and memory as the von Neumann bottleneck. Now, the front side bus bandwidth, ~10 GB/s, actually looks decent. At that rate, you could read all of 8 GB of system memory in less than one second or read 100 bytes in 10 ns. Sadly this throughput is a theoretical maximum (unlike most others in the diagram) and cannot be achieved due to delays in the main RAM circuitry.
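
A quick back-of-the-envelope check of those numbers (the 10 GB/s figure is the article’s; the rest follows from it):

FSB_BANDWIDTH = 10e9                     # bytes per second: the article's ~10 GB/s figure

ram_bytes = 8 * 2**30                    # 8 GB of system memory
print(ram_bytes / FSB_BANDWIDTH)         # ~0.86 seconds to stream all of RAM

small_read = 100                         # bytes
print(small_read / FSB_BANDWIDTH * 1e9)  # ~10 nanoseconds' worth of bandwidth

# These are best-case streaming figures. A real access also pays DRAM latency
# (tens of nanoseconds) before the first byte arrives, which is why the
# theoretical maximum is never reached in practice.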

Sadly the southbridge hosts some truly sluggish performers, for even main memory is blazing fast compared to hard drives. Keeping with the office analogy, waiting for a hard drive seek is like leaving the building to roam the earth for one year and three months. This is why so many workloads are dominated by disk I/O and why database performance can drive off a cliff once the in-memory buffers are exhausted. It is also why plentiful RAM (for buffering) and fast hard drives are so important for overall system performance.

Related: Free Harvard Online Course (MP3s) Understanding Computers and the Internet – How Computers Boot Up – The von Neumann Architecture of Computer Systems – Five Scientists Who Made the Modern World (including John von Neumann)

Demystifying the Memristor

Demystifying the memristor

The memristor – short for memory resistor – could make it possible to develop far more energy-efficient computing systems with memories that retain information even after the power is off, so there’s no wait for the system to boot up after turning the computer on. It may even be possible to create systems with some of the pattern-matching abilities of the human brain.

By providing a mathematical model for the physics of a memristor, the team makes it possible for engineers to develop integrated circuit designs that take advantage of its ability to retain information.

“This opens up a whole new door in thinking about how chips could be designed and operated,” Williams says.

Engineers could, for example, develop a new kind of computer memory that would supplement and eventually replace today’s commonly used dynamic random access memory (D-RAM). Computers using conventional D-RAM lack the ability to retain information once they are turned off. When power is restored to a D-RAM-based computer, a slow, energy-consuming “boot-up” process is necessary to retrieve the data, stored on a magnetic disk, that is required to run the system.
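
For illustration, here is a small simulation in the spirit of the linear ion-drift model published for the TiO2 device; the parameter values below are arbitrary round numbers rather than measured ones. The resistance depends on a state variable w that moves only while current flows, so the device keeps its resistance when the power is removed, which is what would eliminate the boot-up described above.

import math

# Arbitrary illustrative parameters (not measured device values)
R_ON, R_OFF = 100.0, 16000.0   # ohms: fully doped vs. undoped resistance
D = 10e-9                      # device thickness, m
MU = 1e-14                     # dopant mobility, m^2 / (V*s)

def simulate(voltage, t_end=2.0, dt=1e-5):
    """Integrate the linear drift model dw/dt = MU * R_ON / D * i(t)."""
    w = 0.5 * D                # state: width of the doped region
    t = 0.0
    while t < t_end:
        m = R_ON * (w / D) + R_OFF * (1 - w / D)   # memristance M(w)
        i = voltage(t) / m
        w += MU * R_ON / D * i * dt
        w = min(max(w, 0.0), D)                    # clamp to physical bounds
        t += dt
    return m

# Drive with a sine wave; once the current stops, w (and hence the
# resistance) stops changing, which is the nonvolatile behavior described.
final_m = simulate(lambda t: math.sin(2 * math.pi * t))
print(f"resistance after the drive ends: {final_m:.0f} ohms")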

Related: How Computers Boot Up – Nanotechnology Breakthroughs for Computer Chips – Delaying the Flow of Light on a Silicon Chip – Self-assembling Nanotechnology in Chip Manufacturing

New Supercomputer for Science Research

photo of Jaguar Supercomputer

“Jaguar is one of science’s newest and most formidable tools for advancement in science and engineering,” said Dr. Raymond L. Orbach, DOE’s Under Secretary for Science. The new capability will be added to resources available to science and engineering researchers in the USA.

80 percent of the Leadership Computing Facility resources are allocated through the United States Department of Energy’s Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, a competitively selected, peer-reviewed process open to researchers from universities, industry, government and non-profit organizations. Scientists and engineers at DOE’s Oak Ridge National Laboratory are finding an increasing variety of uses for the Cray XT system. A recent report identified 10 breakthroughs in U.S. computational science during the past year. Six of the breakthroughs involved research conducted with the Jaguar supercomputer, including a first-of-its-kind simulation of combustion processes that will be used to design more efficient automobile engines. Read the computational science report. Read the full press release.

ORNL’s Jaguar fastest computer for science research

Jaguar will be used for studies of global climate change, as well as development of alternative energy sources and other types of scientific problem-solving that previously could not be attempted.

Zacharia said ORNL’s Jaguar was upgraded by adding 200 Cray XT5 cabinets – loaded with AMD quad-core processors and Cray SeaStar interconnects – to the computer’s existing 84 Cray XT4 cabinets. The combined machine set a new standard for computational science.

The peak operating speed is apparently just below that of Los Alamos National Laboratory’s IBM Roadrunner system, which is designed for 1.7 petaflops. But the Jaguar reportedly has triple the memory of Roadrunner and much broader research potential.

Because the Jaguar has come online sooner than expected, Zacharia said an alert was sent to top U.S. scientists inviting them to apply for early access to the Oak Ridge computer. Their scientific proposals will be reviewed on an accelerated timetable, he said.

The peak capability of 1.64 petaflops is attributed to 1.384 petaflops from the new Cray XT5, combined with 0.266 petaflops from the existing Cray XT4 system, Zacharia said.

How fast is a quadrillion calculations per second? “One way to understand the speed is by analogy,” Zacharia said recently. “It would take the entire population of the Earth (more than 6 billion people), each of us working a handheld calculator at the rate of one second per calculation, more than 460 years to do what Jaguar at a quadrillion can do in one day.”
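
The arithmetic behind that analogy checks out; a rough verification using the quoted figures:

jaguar_rate = 1e15                       # one quadrillion calculations per second
people = 6e9                             # "more than 6 billion people"
human_rate = 1.0                         # one calculation per person per second

work_per_day = jaguar_rate * 24 * 60 * 60            # calculations Jaguar does in a day
seconds_needed = work_per_day / (people * human_rate)
years_needed = seconds_needed / (365 * 24 * 60 * 60)
print(round(years_needed))               # ~457 years, in line with the quoted "more than 460"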

Related: National Center for Computational Sciences at ORNL site on Jaguar (photo from here) – Open Science Computer Grid – Donald Knuth, Computer Scientist – Saving Fermilab – New Approach Builds Better Proteins Inside a Computer – Does the Data Deluge Make the Scientific Method Obsolete?

Federal Circuit Decides Software No Longer Patentable

The Bilski Decision Is In: Buh-Bye [Most] Business Methods Patents

This was an appeal against a rejection of a business methods patent, and the appeals court has now agreed with the rejection. At issue was whether an abstract idea could be eligible for patent protection. The court says no.

Buh-bye business methods patents!

Court Reverses Position on Biz Methods Patents

The ruling in the case, called In re Bilski, largely disavowed the controversial State Street Bank case of 1998. There, the Federal Circuit opened the door to business method patents, which had until then been excluded from patent protection, by granting protection to a system for managing mutual fund accounts. The decision, according to its detractors — which included several members of the Supreme Court — had led to the issuance of weak patents and exposed financial services companies to high-dollar litigation over business method patents.

Related: Ex Parte Bilski: On the Briefs – Patent Policy Harming USA, and the world – Are Software Patents Evil? – Patent Gridlock is Blocking Developing Lifesaving Drugs – The Pending Problem With Patents – More Evidence of the Bad Patent System

Problems Programming Math

Arithmetic Is Hard – To Get Right by Mark Sofroniou

I’ve been working on arithmetic in Mathematica for more than 12 years. You might think that’s silly; after all, how hard can arithmetic be?

The standard “schoolbook” algorithms are pretty easy. But they’re inefficient and often unnecessarily inaccurate. So people like me have done a huge amount of work to find algorithms that are more efficient and accurate. And the problem is that these algorithms are inevitably more complicated, and one has to be very careful to avoid insidious bugs.

Take multiplying integers, for example. The standard “schoolbook” long-multiplication algorithm uses n^2 multiplications to multiply two n-digit numbers. But many of these multiplications are actually redundant, and we now know clever algorithms that take n^1.58, n log n, or even fewer multiplications for large n. So this means that if one wants to do a million-digit multiplication, Mathematica can do it in a fraction of a second using these algorithms–while it would take at least a few minutes using standard long multiplication.
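
As an illustration of the kind of algorithm the n^1.58 figure refers to, here is a short Python sketch of Karatsuba multiplication, which replaces the four sub-multiplications of the schoolbook split with three. This is a generic textbook version, not Mathematica’s implementation.

def karatsuba(x, y):
    """Multiply two non-negative integers with roughly n^1.585 digit multiplications."""
    if x < 10 or y < 10:                      # small enough: multiply directly
        return x * y
    n = max(x.bit_length(), y.bit_length()) // 2
    high_x, low_x = x >> n, x & ((1 << n) - 1)
    high_y, low_y = y >> n, y & ((1 << n) - 1)

    # Three recursive multiplications instead of the schoolbook four
    z0 = karatsuba(low_x, low_y)
    z2 = karatsuba(high_x, high_y)
    z1 = karatsuba(low_x + high_x, low_y + high_y) - z0 - z2

    return (z2 << (2 * n)) + (z1 << n) + z0

assert karatsuba(123456789, 987654321) == 123456789 * 987654321

The subtlety the author is pointing at is exactly here: the recursive splitting and recombination is where insidious bugs creep in, in a way the schoolbook method never suffers from.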

It’s not easy to get reliable numerical computation, and it’s not something one can “bolt on” after the fact. It’s something one has to build in from the beginning, as we’ve done in Mathematica for nearly 20 years.

Related: Who Killed the Software Engineer? – Sexy Math – Freeware Math Programs – 1=2: A Proof – Things You Need to be a Computer Game Programmer

Algorithmic Self-Assembly

Paul Rothemund, a scientist at Caltech, provides an interesting look at DNA folding and DNA-based algorithmic self-assembly. In the talk he shows the promise of using biological building blocks – DNA origami – to create tiny machines that assemble themselves from a set of instructions.

Algorithmic Self-Assembly of DNA Sierpinski Triangles, PLoS paper.
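
The computation in that paper is, in essence, simple to state: each tile reads two neighboring tiles and displays their XOR, and iterating that local rule from a seed produces the Sierpinski triangle. A short Python sketch of the same rule in software (the display size is arbitrary):

WIDTH, ROWS = 63, 32          # arbitrary display size

row = [0] * WIDTH
row[WIDTH // 2] = 1           # a single seed "tile"

for _ in range(ROWS):
    print("".join("#" if bit else " " for bit in row))
    # Each new cell is the XOR of its two neighbors in the previous row,
    # the same local rule the self-assembling tiles implement.
    row = [row[i - 1] ^ row[(i + 1) % WIDTH] for i in range(WIDTH)]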

I posted a few months ago about how you can participate in protein folding research with the Protein Folding Game.

Related: Viruses and What is Life – DNA Seen Through the Eyes of a Coder – Synthesizing a Genome from Scratch – Evidence of Short DNA Segment Self Assembly – Scientists discover new class of RNA

Autonomous Helicopters Teach Themselves to Fly

photo of Stanford Autonomous Learning Helicopters

Stanford’s “autonomous” helicopters teach themselves to fly

Stanford computer scientists have developed an artificial intelligence system that enables robotic helicopters to teach themselves to fly difficult stunts by watching other helicopters perform the same maneuvers.

The dazzling airshow is an important demonstration of “apprenticeship learning,” in which robots learn by observing an expert, rather than by having software engineers peck away at their keyboards in an attempt to write instructions from scratch.

It might seem that an autonomous helicopter could fly stunts by simply replaying the exact finger movements of an expert pilot using the joysticks on the helicopter’s remote controller. That approach, however, is doomed to failure because of uncontrollable variables such as gusting winds.
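
The Stanford system uses a more sophisticated apprenticeship-learning pipeline than can be shown here, but the core idea is to fit a model of what the expert does as a function of the situation rather than hand-code a controller. Below is a deliberately simplified behavioral-cloning stand-in, not the published method: a least-squares fit from observed state to expert control input, with made-up state and control dimensions and synthetic "demonstration" data.

import numpy as np

rng = np.random.default_rng(0)

# Made-up dimensions: 8 state variables (attitude, rates, ...), 4 control inputs
STATE_DIM, CONTROL_DIM, N_DEMO = 8, 4, 500

# Stand-in for logged demonstrations: states seen and the stick inputs
# the expert pilot applied in each state.
expert_states = rng.normal(size=(N_DEMO, STATE_DIM))
true_policy = rng.normal(size=(STATE_DIM, CONTROL_DIM))     # unknown to the learner
expert_controls = expert_states @ true_policy + 0.05 * rng.normal(size=(N_DEMO, CONTROL_DIM))

# Behavioral cloning: least-squares fit of control = state @ W
W, *_ = np.linalg.lstsq(expert_states, expert_controls, rcond=None)

def policy(state):
    """Controls the learned policy would apply in a given state."""
    return state @ W

new_state = rng.normal(size=STATE_DIM)
print("learned controls:", policy(new_state))

The real system goes further, also learning a model of the helicopter’s dynamics and inferring the intended trajectory from several imperfect demonstrations, which is what lets it cope with the gusting winds mentioned above.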

Very cool. Related: MIT’s Autonomous Cooperating Flying Vehicles – The sub-$1,000 UAV Project – 6 Inch Bat Plane – Kayak Robots