Category Archives: Technology

Artificial Intelligence Finds Ancient Indus Script Matches Spoken Language

Artificial Intelligence Cracks 4,000-Year-Old Mystery by Brandon Keim

An ancient script that’s defied generations of archaeologists has yielded some of its secrets to artificially intelligent computers.

The Indus script, used between 2,600 and 1,900 B.C. in what is now eastern Pakistan and northwest India, belonged to a civilization as sophisticated as its Mesopotamian and Egyptian contemporaries. However, it left fewer linguistic remains. Archaeologists have uncovered about 1,500 unique inscriptions from fragments of pottery, tablets and seals. The longest inscription is just 27 signs long.

The researchers fed the program sequences of four spoken languages: ancient Sumerian, Sanskrit and Old Tamil, as well as modern English. Then they gave it samples of four non-spoken communication systems: human DNA, Fortran, bacterial protein sequences and an artificial language.

The program calculated the level of order present in each language. Non-spoken languages were either highly ordered, with symbols and structures following each other in unvarying ways, or utterly chaotic. Spoken languages fell in the middle.

When they seeded the program with fragments of Indus script, it returned with grammatical rules based on patterns of symbol arrangement. These proved to be moderately ordered, just like spoken languages.
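The "level of order" being measured here is, at heart, conditional entropy: how predictable the next symbol is given the current one. The published study used a more refined statistical model than this, but the basic idea can be sketched as follows (the toy sequences are illustrative, not Indus data):

```python
import math
from collections import Counter

def conditional_entropy(sequence):
    """Entropy (in bits) of the next symbol given the current one.

    Near 0 means rigid, unvarying ordering; high values mean
    near-random sequences. Spoken languages tend to fall in between.
    """
    pairs = Counter(zip(sequence, sequence[1:]))   # adjacent symbol pairs
    firsts = Counter(sequence[:-1])                # first symbol of each pair
    total = sum(pairs.values())
    h = 0.0
    for (a, b), n in pairs.items():
        p_pair = n / total          # P(a, b)
        p_cond = n / firsts[a]      # P(b | a)
        h -= p_pair * math.log2(p_cond)
    return h

# A rigid, repeating "language" vs. a more varied one
rigid = list("ababababababab")
varied = list("abacbacbbcaabccba")
print(conditional_entropy(rigid))   # 0.0: fully predictable
print(conditional_entropy(varied))  # higher: more varied transitions
```

On real corpora the same calculation separates highly ordered systems (near zero) and chaotic ones (near maximum) from the middle ground where natural languages, and the Indus inscriptions, sit.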

Related: The Rush to Save Timbuktu’s Crumbling Manuscripts – The Mystery of the Voynich Manuscript – Aztec Math

Keeping Out Technology Workers is not a Good Economic Strategy

Barriers between countries’ job markets are decreasing. Jobs are more international today than 20 years ago, and that trend will continue. People are going to move to different countries to do jobs (especially in science, engineering and advanced technology). The USA currently holds a large share of those jobs (for many reasons), but there is nothing that requires those jobs to be in the USA.

The biggest impact of the USA turning away great scientists and engineers will be that they go to work outside the USA, increasing the speed at which the USA loses its place as the leading location for science, engineering and technology work. This is no longer the 1960s. Back then, those turned away by the USA had trouble finding work elsewhere that could compete with the work done in the USA. If the USA wants to isolate itself (with 5% of the world’s population) from a fairly open global science and engineering job market, other countries will step in (they already are trying, realizing what a huge economic benefit doing so provides).

Those other countries will be able to put together great centers of science and engineering innovation. Those areas will create great companies that create great jobs. I can understand wanting this to be 1960, but wanting it doesn’t make it happen.

You could go even further and cut off foreign science and engineering students’ access to USA universities (which are the best in the world). That would put a crimp in other countries’ plans for a very short while. Soon many professors would move to foreign schools: the foreign schools would need those professors and offer them a great deal of pay, and the professors would need the jobs as USA schools laid off faculty when students disappeared. Granted, the best schools and best professors could stay in the USA, but plenty of very good ones would leave.

I just don’t think the idea of closing off companies in the USA from using foreign workers will work. We are lucky now that, for several reasons, it is still easiest to move people from Germany, India, Korea, Mexico and Brazil to the USA to work on advanced technology projects. That advantage, however, is much, much smaller than it was 30 years ago. Today, moving all those people to some other location (say Singapore, England, Canada or China) will work pretty well, and 5 years from now it will work much better, in whatever locations emerge as the leading alternative sites. Making it more appealing to set up centers of excellence outside the USA is not a good strategy for those in the USA who want science, engineering and computer programming jobs. We should instead do what we can to encourage more companies to centralize their technology excellence in the USA.

Comment on Reddit discussion.

Related: Science and Engineering in Global Economics – Global technology job economy – Countries Should Encourage Immigration of Technology Workers – The Software Developer Labor Market – What Graduates Should Know About an IT Career – Relative Engineering Economic Positions – China’s Technology Savvy Leadership – Education, Entrepreneurship and Immigration – The Future is Engineering – Global Technology Leadership

The Software Developer Labor Market

With the economy today you don’t hear much about a desperate need for programmers. But the testimony to Congress of Dr. Norman Matloff, Department of Computer Science, University of California at Davis (presented April 21, 1998; updated December 9, 2002), Debunking the Myth of a Desperate Software Labor Shortage, is full of interesting information on current and past job markets.

The industry says that it will need H-1B visas temporarily, until more programmers can be trained. Is this true?

No, it’s false and dishonest… The industry has been using this “temporary need” stall tactic for years, ever since the H-1B law was enacted in 1990. In the early- and mid-1990s, for example, the industry kept saying that H-1Bs wouldn’t be needed after the laid-off defense programmers and engineers were retrained, but never carried out its promise. It hired those laid off in low-level jobs such as technician (which is all the retraining programs prepared them for), and hired H-1Bs for the programming and engineering work.

Unlike Dr. Matloff, and many readers of this blog, I am actually not a big opponent of H-1B visas. I believe we gain more by allowing tech-savvy workers to work in the USA than we lose. I understand people fear jobs are being taken away, but I don’t believe it. I believe one of the reasons we maintain such a strong programming position is that we encourage people to come to the USA to program.

I also believe there are abuses under the current law, with companies playing games to claim no one can be found in the USA with the proper skills. And I believe those opposed to H-1B visas make reasonable arguments; this testimony is a good presentation of those arguments.

This obsession with specific skills is unwarranted. What counts is general programming talent – hiring smart people – not experience with specific software technologies.

Very true.

What developers should do.

Suppose you are currently using programming language X, but you see that X is beginning to go out of fashion, and a new language (or OS or platform, etc.) Y is just beginning to come on the scene. The term “just beginning” is crucial here; it means that Y is so new that almost no one has work experience in it yet. At that point you should ask your current employer to assign you to a project which uses Y, and let you learn Y on the job. If your employer is not willing to do this, or does not have a project using Y, then find another employer who uses both X and Y, and thus who will be willing to hire you on the basis of your experience with X alone, since very few people have experience with Y yet.

Good advice.

Related: IT Talent Shortage, or Management Failure? – Preparing Computer Science Students for Jobs – Engineering Graduates Again in Great Shape (May 2008) – What Graduates Should Know About an IT Career – posts related to computer programming

Build Your Own Tabletop Interactive Multi-touch Computer

This is a fantastic Do-It-Yourself (DIY) engineering story. Very interesting; definitely go read the whole article: Build Your Own Multitouch Surface Computer

First, some acknowledgments are in order. Virtually all the techniques used to create this table were discovered at the Natural User Interface Group website, which serves as a sort of repository for information in the multitouch hobbyist community.

In order for our setup to work, we needed a camera that senses infrared light, but not visible light. It sounds expensive, but you’d be surprised. In this section, we’ll show you how we created an IR camera with excellent resolution and frame-rate for only $35—the price of one PlayStation 3 Eye camera. “But that’s not an IR camera,” you say? We’ll show you how to fix that.

As it turns out, most cameras are able to sense infrared light. If you want to see first-hand proof that this is the case, try this simple experiment: First, find a cheap digital camera. Most cell phone cameras are perfect for this. Next, point it at the front of your TV’s remote control. Then, while watching the camera’s display, press the buttons on the remote. You’ll see a bluish-white light that is invisible to the naked eye. That’s the infrared light used by the remote to control the TV.

Like the computer, the projector we used for the build was something we scavenged up. The major concern for a projector to use in this kind of system is throw distance—the ratio between projection distance and image size. Short-throw projectors, which are sold by all the major projector brands, work the best for this kind of project, because they can be set up at the bottom of the cabinet and aimed directly at the surface. Unfortunately, they also tend to be more expensive.
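Throw distance follows directly from that ratio: distance = throw ratio × image width, in whatever units you use. A quick sketch with illustrative numbers (my own, not from the article) shows why a long-throw projector won't fit upright in a tabletop cabinet:

```python
def throw_distance(throw_ratio, image_width):
    """Distance the projector must sit from the screen.

    throw_ratio = projection distance / image width, so
    distance = throw_ratio * image_width (same units in and out).
    """
    return throw_ratio * image_width

# Illustrative: a 30-inch-wide tabletop surface with a typical
# long-throw (2.0:1) projector vs. a short-throw (0.5:1) model.
print(throw_distance(2.0, 30))  # 60.0 inches: needs a folded light path
print(throw_distance(0.5, 30))  # 15.0 inches: fits upright in the cabinet
```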

Ever thrifty, we went with a projector we could use for free: an older home-theater projector borrowed from a friend. Because of the longer throw distance on this model, we had to mount the projector near the top of the cabinet, facing down, and use a mirror to reflect the image up onto the screen. For this we ordered a front-side mirror (a mirror with the reflective surface on the front of the glass, rather than behind it) to eliminate any potential “ghosting” problems, caused by dual reflections from the front and back of the glass in an ordinary mirror.

Related: Home Engineering: Gaping Hole Costume – Very Cool Wearable Computing Gadget from MIT – ‘DIY’ kidney machine saves girl – Holographic Television on the Way – Automatic Cat Feeder – Video Goggles

Google Server Hardware Design

Ben Jai, Google Server Platform Architect, discusses the Google server hardware design. Google has designed their own servers since the beginning and shared details this week on that design. As we have written previously Google has focused a great deal on improving power efficiency.

Google uncloaks once-secret server

Google’s big surprise: each server has its own 12-volt battery to supply power if there’s a problem with the main source of electricity. The company also revealed for the first time that since 2005, its data centers have been composed of standard shipping containers–each with 1,160 servers and a power consumption that can reach 250 kilowatts.

Efficiency is another financial factor. Large UPSs can reach 92 to 95 percent efficiency, meaning that a large amount of power is squandered. The server-mounted batteries do better, Jai said: “We were able to measure our actual usage to greater than 99.9 percent efficiency.”

Related: Data Center Energy Needs – Reduce Computer Waste – Cost of Powering Your PC – Curious Cat Science and Engineering Search

Robot Independently Applies the Scientific Method

Robot achieves scientific first by Clive Cookson

A laboratory robot called Adam has been hailed as the first machine in history to have discovered new scientific knowledge independently of its human creators. Adam formed a hypothesis on the genetics of bakers’ yeast and carried out experiments to test its predictions, without intervention from its makers at Aberystwyth University.

The result was a series of “simple but useful” discoveries, confirmed by human scientists, about the gene coding for yeast enzymes. The research is published in the journal Science.

Adam is the result of a five-year collaboration between computer scientists and biologists at Aberystwyth and Cambridge universities.

The researchers endowed Adam with a huge database of yeast biology, automated hardware to carry out experiments, supplies of yeast cells and lab chemicals, and powerful artificial intelligence software. Although they did not intervene directly in Adam’s experiments, they did stand by to fix technical glitches, add chemicals and remove waste.

“Adam is a prototype,” says Prof Ross King, who led the project. “Eve is better designed and more elegant.” In the new experiments, Adam and Eve will work together to devise and carry out tests on thousands of chemical compounds to discover antimalarial drugs.

Very cool.

Related: Autonomous Helicopters Teach Themselves to Fly – 10 Most Beautiful Physics Experiments – Fold.it, the Protein Folding Game – posts on robots

Robot with Biological Brain

The Living Robot by Joe Kloc

Life for Warwick’s robot began when his team at the University of Reading spread rat neurons onto an array of electrodes. After about 20 minutes, the neurons began to form connections with one another. “It’s an innate response of the neurons,” says Warwick, “they try to link up and start communicating.”

For the next week the team fed the developing brain a liquid containing nutrients and minerals. And once the neurons established a network sufficiently capable of responding to electrical inputs from the electrode array, they connected the newly formed brain to a simple robot body consisting of two wheels and a sonar sensor.

At first, the young robot spent a lot of time crashing into things. But after a few weeks of practice, its performance began to improve as the connections between the active neurons in its brain strengthened. “This is a specific type of learning, called Hebbian learning,” says Warwick, “where, by doing something habitually, you get better at doing it.”
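The textbook form of the Hebbian rule Warwick names is simple: a connection weight grows whenever the neurons on either end of it are active together. This toy sketch (my own illustration of the rule, nothing like the actual cultured-neuron setup) shows habitual co-activation strengthening a link:

```python
def hebbian_update(weight, pre, post, lr=0.1):
    """Classic Hebbian rule: strengthen the connection whenever the
    pre- and post-synaptic neurons fire together (pre * post > 0)."""
    return weight + lr * pre * post

# Toy run: a sensor neuron and a motor neuron repeatedly fire
# together (as when the robot habitually steers away from a wall).
w = 0.0
for _ in range(20):
    pre, post = 1, 1          # both active on each "practice" trial
    w = hebbian_update(w, pre, post)
print(round(w, 6))  # 2.0: the habitual pairing has strengthened the link
```

If either neuron stays silent on a trial, the product is zero and the weight is unchanged, which is why only the habitually co-active pathways strengthen.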

“It’s fun just looking at it as a robot life form, but I think it may also contribute to a better understanding of how our brain works,” he says. Studying the ways in which his robot learns and stores memories in its brain may provide new insights into neurological disorders like Alzheimer’s disease.

Related: Roachbot: Cockroach Controlled Robot – Rat Brain Cells, in a Dish, Flying a Plane – How The Brain Rewires Itself – Brain Development

Cardiac Cath Lab: Innovation on Site

Photo of John Cooke at the Cardiac Catheterisation Labs at St. Thomas’ Hospital in London

I manage several blogs on related topics. Often blog posts stay firmly in the domain of one blog or the other. Occasionally a topic blurs the lines between the blogs (which I like). This post ties directly to my Curious Cat Management Improvement Blog. The management principles I believe in are very similar to engineering principles (no surprise, given this blog). And actual observation in situ is important to fully understand the situation and what would be helpful. Management relying on reports instead of seeing things in action results in many poor decisions. And engineers doing so also results in poor decisions.

Getting to Gemba – a day in the Cardiac Cath Lab by John Cooke

I firmly believe that it is impossible to innovate effectively without a clear understanding of the context and usage of your final innovation. Ideally, I like to “go to gemba“, otherwise known as the place where the problem exists, so I can dig for tacit knowledge and observe unconscious behaviours.

I didn’t disgrace myself and I’ve been invited back for another day or so. What did I learn that I didn’t know before? The key things I learnt were:

  • The guide wire isn’t just a means of steering the catheter into place, as I thought; it is a functional tool in its own right
  • Feel is really critical to the cardiologist
  • There is a huge benefit in speeding up procedures, in terms of patient wellbeing and lab efficiency
  • Current catheter systems lack the level of detection capability and controllability needed for some more complex PCIs (Percutaneous Cardiac Interventions)

The whole experience reminded me that in terms of innovation getting to gemba is critical. When was the last time you saw your products in use up-close and personal?

Related: Jeff Bezos Spends a Week Working in Amazon’s Kentucky Distribution Center – Toyota Engineering Development Process – Marissa Mayer on Innovation at Google – Be Careful What You Measure – S&P 500 CEOs are Often Engineering Graduates – Experiment Quickly and Often

Google Summer of Code 2009

Google Summer of Code is a global program that offers student developers stipends to write code for various open source software projects. Google funds the program with $4,500 for each student (and pays the mentor organization $500). Google works with several open source, free software, and technology-related groups to identify and fund projects over a three month period.

Since its inception in 2005, the program has provided opportunities for nearly 2,500 students from nearly 100 countries. Through Google Summer of Code, accepted student applicants are paired with a mentor or mentors from the participating projects, thus gaining exposure to real-world software development scenarios and the opportunity for employment in areas related to their academic pursuits. In turn, the participating projects are able to more easily identify and bring in new developers. Best of all, more source code is created and released for the use and benefit of all.

Google funded approximately 400 student projects in 2005, 600 in 2006, 900 in 2007 and 1,125 in 2008, and will fund approximately 1,000 in 2009.

Applications are accepted only from March 23rd through April 3rd. That is still a short window, but in previous years applications were taken for only one week. Organizations hosting students include: Creative Commons, MySQL, Debian, The Electronic Frontier Foundation/The Tor Project, haskell.org, Grameen Foundation USA, National Center for Supercomputing Applications, Ruby on Rails, Wikimedia Foundation and WordPress. See the full list of organizations and link to descriptions of the projects each organization offers.

See the externs.com internship directory (another curiouscat.com ltd. site) for more opportunities including those in science and engineering.

Related: Google Summer of Code Projects 2008 – posts on fellowships and scholarships – Larry Page on How to Change the World – comic on programmers – Interview of Steve Wozniak

Mobile users at risk of ID theft


Security experts say that password protection and, where possible, data encryption are essential. The advent of smartphones has seen the types of information that pass through handsets proliferate, and it is now much more common to store sensitive information and work-related details on handsets.

But the storage of increasingly personal information is also on the rise; the survey found that 16% of people store their bank details on their phones and nearly a quarter store PIN numbers and passwords.

Security experts agree that the storage of such crucial details is ill-advised, and advise users to take advantage of the available security features of a phone.
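If software only needs to verify a PIN rather than recall it, one standard alternative to storing it in plain text is a salted, slow hash. This is a generic illustration of that kind of advice, not anything from the article:

```python
import hashlib
import hmac
import os

def store_pin(pin: str) -> tuple[bytes, bytes]:
    """Keep a salted, slow (PBKDF2) hash of the PIN, never the PIN itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)
    return salt, digest

def check_pin(pin: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = store_pin("4921")   # "4921" is a made-up example PIN
print(check_pin("4921", salt, digest))  # True
print(check_pin("0000", salt, digest))  # False
```

Someone who steals the stored salt and digest still has to guess the PIN, and the deliberately slow hash makes each guess expensive.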

Related: Freeware Wi-Fi app turns iPod into a Phone – Eliminate Your Phone Bill – Lack of Security of Electronic Voting