AI Algorithm Masters Space Invaders in All-Night Gaming Session

Singularity Hub


Remember Space Invaders? The arcade game and later Atari hit pitted a lone pixellated laser cannon against a swarm of equally pixellated descending aliens. Maybe you enjoyed the game occasionally, or maybe you stayed up all night seeking mastery.

If it was the latter—you now share that experience with a machine.

In a recent interview, Demis Hassabis—founder of artificial intelligence firm DeepMind, acquired last year by Google for over $500 million—talked AI and showed video of one of his group’s deep learning algorithms killing it at Space Invaders.

Hassabis says artificial intelligence is the science of making intelligent machines. There are two ways to do that: pre-program special solutions that the machine automatically executes, or build a machine with no particular skills but a general ability to learn from its experiences and incoming environmental information.

Most intelligent programs to date are of the former category. The programs of the future, programs like those being developed by DeepMind and others, will be able to learn on the fly and improve their skills without further human intervention.
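To make the distinction concrete, here is a minimal sketch of the second approach: tabular Q-learning, a far simpler cousin of the deep learning DeepMind uses. Nothing below is DeepMind's code (their system trained deep neural networks on raw Atari pixels); the five-cell corridor environment, parameters, and reward are all invented for illustration.

```python
import random

# Toy illustration of "learning from experience": tabular Q-learning on a
# five-cell corridor where reaching the rightmost cell pays a reward of 1.
N_STATES = 5
ACTIONS = (-1, +1)                 # move left, move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1   # next state, reward, done?

random.seed(0)
for episode in range(500):
    s, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2, r, done = step(s, a)
        # Q-learning update: move the estimate toward reward + discounted
        # value of the best action available from the next state.
        best_next = max(q[(s2, act)] for act in ACTIONS)
        q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
        s = s2

# The learned greedy policy: from every non-terminal cell, move right.
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

No rule about the corridor was ever written into the program; the "move right" policy emerges entirely from trial, error, and reward.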

Early demonstrations of these learning algorithms are simple. DeepMind is famous for its video game playing programs, for example. And they are cool. Hassaabis says the software in the video went from terrible to superhuman in about eight hours of play.

But Google didn’t pay half a billion dollars for a 70s-era AI gamer.

Perhaps the most obvious reason they acquired DeepMind is the technology’s potential to improve search of text or images. And even that is likely too narrow in the longer run.

Google’s current fleet of self-driving cars, for example, is pretty amazing. But the cars don’t learn. They aren’t flexible in a world of endless variety, relying instead on programmers to account for as many situations as they can. A Sisyphean task.

That isn’t to say we can’t get a high degree of automation without deep learning. But a fully self-driving car will likely require more flexibility on the fly than is possible now.

Or consider Google’s acquisition of eight robotics firms last December. Robots will remain glorified Roombas until they can learn and interact with their surroundings. Perhaps Google will pair deep learning with future robots in the factory or home.

Hassabis, at least, thinks we’ll see personal robots and self-driving cars in the next five, ten, or fifteen years. But he’s even more excited when he imagines what happens when powerful artificial intelligence programs start tackling the biggest challenges.

“Macroeconomics, climate change, disease, energy—the science of these comes down to crunching masses of information. It’s too much for even very smart human scientists to fully understand. We’re probably missing things. I think we need aids like artificial intelligence technology to…make better use of this data for the good of society.”



Million Robot Revolution Delayed—iPhone Manufacturer Foxconn Hires More Humans



Terry Gou is CEO of electronics manufacturer Foxconn. He’s also a big proponent of replacing humans with robots in factories. Gou said Foxconn would replace human workers with a million robots in three years. That was three years ago.

Since that first announcement, Gou has indeed pursued robotics, developing his own robotic arms (or Foxbots, as they’re called) to replace humans in his automated factories of the future. But his million-robot workforce has yet to materialize.

What has materialized?

Earlier this year, Foxconn said it was preparing to deploy 10,000 Foxbots, each costing $20,000 to $25,000, to make iPhones. Each robot could reportedly produce some 30,000 devices a year, and Foxconn would add some 30,000 robots annually.

Car assembly has long been automated, but some manufacturing still requires a human touch.

It isn’t a million robots, but it would represent a pretty serious challenge to human workers if accurate and scalable. To date, the reason factories like Foxconn’s aren’t fully automated is that robots can’t match the dexterity of human hands and lack the judgment to perform quality control checks on the assembly line.

But instead of the bots driving mass layoffs, the firm reportedly hired a record 100,000 human workers to cope with demand for the latest iPhone. Further, the robots, it was said, would merely assist existing human workers, not replace them.

And then it surfaced that Gou was dissatisfied with his first generation Foxbots. They were not up to snuff in terms of proficiency and flexibility. Generation two is forthcoming. But it’s apparent Gou’s million robot revolution is nowhere in sight.

So who cares if a CEO made an overexuberant forecast? For one, too often big claims make headlines and aren’t scrutinized down the road to see how well they hold up. But there’s another good reason to keep checking in on Gou’s Foxbots.

In the past, we’ve written about worries that robots will replace humans and cause structural unemployment—that is, non-cyclical long-term joblessness. To a degree, Foxconn’s progress represents an early benchmark for such concerns.

Here we have a manufacturing giant with every reason to pour resources into automation. They’ve had three years to develop robots dexterous enough to manipulate circuit boards, place touch screens, and generally automate processes.

Most factory robots still require precisely programmed conditions to work.

The task, however, has proven more difficult than it first appeared, development has been slower than hoped, and human-level performance harder to match. Further, the number of bots is less than promised by two orders of magnitude, and the number of human workers required isn’t down at all—in fact, at least for now, it’s rising.

I don’t take this as evidence the robot revolution won’t happen. Or that it’ll be much slower than expected. But I do think it offers insight into how hard robotics still is—particularly when it comes to physical tasks humans can do without blinking an eye.

Common wisdom has it that the first wave of robots automated manual and physical tasks. But that’s not quite right. There are still a significant number of manual and physical jobs that are much more easily and cost-effectively performed by humans.

This class of labor includes any job that is unpredictable from one iteration to the next. Such labor might require the worker to see and react to a changing environment, to shift position and carefully perform the task from a different orientation, to sort objects of varying size and shape, or to make judgment calls on inspection.

Today’s robots, for example, couldn’t build a house on their own. Robots aren’t capable of such feats yet—but they likely will be in the future.

For example, computer vision, a key component necessary for recognizing and adapting to changing environments, is progressing at a rapid clip currently.

Team Schaft’s robot opening a door at the 2013 Darpa Robotics Challenge trials.

The accuracy with which machines can look at a picture or video feed and recognize what they see doubled in the last year. Already there are robots, like the one made by Google acquisition Industrial Perception, that can look at a pile of haphazardly stacked boxes, recognize their orientations, and decide how best to pick them up.

The robots taking part in the DARPA Robotics Challenge still readily show their limitations—but they also show how much is within reach in the coming years. Challenges include opening doors, driving cars, and using tools made for humans.

Even so, I wonder if intelligent programs may, counterintuitively, replace many jobs of the mind before robots take over all manual and “unskilled” labor—in AI and robotics, the latter problem has pretty consistently proven the harder nut to crack.

The creators of Siri, for example, are hard at work on a new digital assistant that some experts say is the future of intelligent agents. And they’re not alone—Google and Facebook have been collecting AI experts like baseball cards in recent years.

Intelligent, natural language software will be useful on a smartphone or in an automated home—but it could also mark the end of call centers in India or China.

Robot writers are already constructing formulaic earnings reports and sports recaps. It’s not a terribly far reach to find them searching out multiple primary sources, parsing them for facts, and blogging secondary news articles. Watson’s natural language processing abilities already approximate such a process.
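For a sense of how formulaic such machine-written recaps can be, here is a toy template filler. This is not the pipeline any real robot writer uses (those also extract the figures automatically from filings); the company name and numbers are made up for illustration.

```python
# A toy illustration of template-based "robot writing": plug structured
# financial data into a sentence template and emit a formulaic recap.
def earnings_recap(company, quarter, eps, eps_expected, revenue_bn):
    beat_or_miss = "beat" if eps > eps_expected else "missed"
    return (
        f"{company} reported {quarter} earnings of ${eps:.2f} per share, "
        f"which {beat_or_miss} analyst expectations of ${eps_expected:.2f}. "
        f"Revenue came in at ${revenue_bn:.1f} billion."
    )

# Hypothetical inputs -- in a real system these come from a data feed.
print(earnings_recap("Acme Corp", "Q3", 1.42, 1.30, 18.6))
```

Swap the template library for one conditioned on extracted facts from several sources, and you have the skeleton of the secondary news article described above.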

What about the ultimate extrapolation, where robots do everything humans do, only much better? That isn’t in sight yet. But it’s certainly conceivable in the coming years.

One of Arthur C. Clarke’s famous three laws of prediction is, “When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.”

So if robots and artificial intelligence match and then thoroughly whip humans at their own game—what then? That is the billion-dollar question with no answer.

Some predict a leisure society, some mass unemployment and misery. But why utopia or dystopia? Perhaps it will simply be the real world. A rocky transition in the short run—as we’ve seen in historical episodes of steep productivity gains—and a new economy later on, complete with a wide variety of jobs we simply can’t imagine right now.

Image Credit: Shutterstock.com; DARPA; Steve Jurvetson

Cancer Metastasis Reduced Up to 90% in Mice Using Engineered Decoy Protein



Cancer often begins in one part of the body but spreads elsewhere via the bloodstream or lymphatic system. This spreading, called metastasis, makes the disease deadly and difficult to halt—even using chemotherapy drugs with serious side effects.

However, in a preclinical study in mice, a Stanford team may have discovered another way to slow or even stop tumors from metastasizing.

Cancer spreads when certain proteins link up on cells, causing them to break off from the tumor. The Stanford study focused on two such proteins. One protein (Axl) forms bristle-like receptors on the cell’s surface tailored to fit the other protein (Gas6).

When two Gas6 proteins interact with two Axl proteins, the cancer cells are able to drift away from the original tumor and form new tumors in other areas.

To prevent this interaction, the scientists engineered a decoy Axl protein that is as much as a hundred times more effective at binding Gas6 than the naturally occurring version. When deployed in the blood, the decoy proteins can bind Gas6 proteins before they have a chance to interact with the Axl proteins on the cancer cells.

In a recent paper published in Nature, lead authors Jennifer Cochran, a Stanford associate professor of bioengineering, and Amato Giaccia, professor of radiation oncology, say the decoy protein significantly slowed metastasis in their study.

After testing the decoy protein in mice with breast cancer and ovarian cancer, the scientists found a 78% reduction in metastatic nodules in the breast cancer group and a 90% decrease in metastatic nodules in the ovarian cancer group.

And unlike current cancer drugs, the researchers say the decoy protein is nontoxic.

How the scientists engineered the protein is almost as fascinating as the results themselves. The team mimicked evolution—only at a vastly accelerated pace. Using advanced analytics software and lab equipment, the team built and evaluated over ten million minor variants of the Axl protein to find the one that best fit Gas6.
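The mutate-and-screen logic of such directed evolution can be sketched in a few lines. This toy treats "binding fitness" as similarity to a made-up optimal sequence, a drastic simplification of a real binding assay; the target sequence, pool size, and scoring are all invented.

```python
import random

# A toy mutate-and-select loop in the spirit of directed evolution.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
TARGET = "MKTAYIAKQR"   # stand-in for the hypothetical best-binding sequence
random.seed(1)

def fitness(seq):
    # Fraction of positions matching the (hypothetical) optimum; a real
    # screen would measure binding affinity instead.
    return sum(a == b for a, b in zip(seq, TARGET)) / len(TARGET)

def mutate(seq):
    # Introduce one random point mutation.
    i = random.randrange(len(seq))
    return seq[:i] + random.choice(AMINO_ACIDS) + seq[i + 1:]

best = "".join(random.choice(AMINO_ACIDS) for _ in TARGET)
for generation in range(2000):
    # Generate a small pool of variants and keep the fittest survivor.
    pool = [mutate(best) for _ in range(20)] + [best]
    best = max(pool, key=fitness)

print(best, fitness(best))
```

Scale the pool from twenty variants per round to millions, and you have the shape (though none of the biochemistry) of the screening effort described above.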

The researchers hope their work may extend beyond Axl proteins. There are other receptors, Mer and Tyro3, that bind Gas6 and are associated with metastasis—the decoys could render these harmless too, since Gas6 bound to a decoy wouldn’t be free to interact with them.

There is, of course, a long way to go before a therapy based on the group’s findings might make its way into the mainstream. They’ll have to scale production of the protein, complete more animal tests, and eventually, do human trials.

In the meantime, however, it offers a ray of hope for a much more humane cancer treatment in the future and a fascinating glimpse into the promise of bioengineering.

Image Credit: protein courtesy of Shutterstock

This Week’s Awesome Stories from around the Web (Through Oct 11)



What’s the most fascinating, intriguing story you’ve read recently? The Hub team has put together our list of what we’re reading from around the web this week. Did we miss anything? If so, add it to the comments.

SOCIAL MEDIA: Twitter Is Broken
David Auerbach | Slate
Rickey pointed to the 140-character limitation and its tendency to encourage mindless agreement (or disagreement), but that’s secondary to the bigger problem: that Twitter breaks the public/private divide in a way that people simply can’t cope with.

ROBOTS: Here comes the future: We’re making robots that feel!
Diane Ackerman | Salon
Lipson’s brainchildren would be the first generation of truly self-reliant machines, gifted with free will by their soft, easily damaged creators. These synthetic souls would fend for themselves, learn, and grow—mentally, socially, physically—in a body not designed by us or by nature, but by fellow computers.

FUTURE OF WORK: Outsourced Jobs Are No Longer Cheap, So They’re Being Automated
Jason Koebler | Motherboard
“There’s been some ‘buyer’s remorse for massive decentralization of the workforce.’”

COMMUNICATION: Why It’s So Hard To Detect Emotion In Emails And Texts
Eric Jaffe | Co.Design
The lesson is a little face or phone time can go a long way toward exchanging more personality information, forming more positive impressions, and reducing email awkwardness. Short of that, it can help to use concrete emotional words in an email (e.g. “I’m happy to say…”), or to clarify someone’s tone (“when you said that, I took it to mean…”), or if you must, to dispatch emoticons.

EXPONENTIAL TECHNOLOGIES: Civilization: Beyond Earth and the Ultra-Cool Technologies of Tomorrow
Colin Campbell | Polygon
“Having a swarm of orbital entities that you’re interlinking with that provide information and insight on a global scale that you can’t get from a single point on the globe leads to all kinds of interesting things.”

SOCIETY: Confessional in the Palm of Your Hand
Rachel Metz | Technology Review
“Many of us are addicted to sharing status updates on Facebook, photos on Instagram, and thoughts on Twitter. But real, raw honesty is tricky online. It’s hard to say what you really think when your true identity is attached, especially if your post could get you in trouble, either now or years down the line.”

[Image credit: virtual love courtesy of Shutterstock]

Artificial Spleen ‘Cleans’ Blood of Pathogens



In one of the gutsiest performances in sports history, NFL quarterback Chris Simms had to be carted off the field after taking several vicious hits from the defense during a game in 2006. Remarkably, Simms returned to the game shortly thereafter and led his team on a scoring drive before having to leave the game for good.

As it turns out, Simms had ruptured his spleen and lost nearly five pints of blood.

While you can live without your spleen, it serves several important functions in the body including making antibodies and maintaining a reservoir of blood. It also works to keep the blood clean by removing old blood cells and antibody-coated pathogens.

Now, scientists from Harvard’s Wyss Institute for Biologically Inspired Engineering in Boston have developed an artificial spleen shown to rapidly remove bacteria and viruses from blood. The technology could be useful in many scenarios, including protecting people with immunodeficiencies and those infected with difficult-to-treat pathogens like the Ebola virus. It also has great potential to reduce the incidence of sepsis, a leading cause of death that results when the immune system tries but fails to control an infection effectively.

In the 2013 sci-fi thriller Elysium, the filmmakers imagined a futuristic body scanner that can quickly identify and treat almost any disease. While we may be far from an all-in-one machine that can handle any ailment, the artificial spleen developed by a Harvard team led by Dr. Donald Ingber could play a part in such a machine.

Their work, published last month in the journal Nature Medicine, showed the device removing more than 90% of bacteria from blood.

While this device has potential to be a major advance in treating infections, the way it works is relatively straightforward. In most animals, a protein called mannose-binding lectin (MBL) binds to mannose, a type of sugar. Mannose is found on the outer surface of many pathogens, including bacteria, fungi and viruses. It is even found on some toxins that are produced by bacteria and contribute to illness.

Wyss Institute microfluidic biospleen.

Dr. Ingber’s team took a modified version of MBL and coated magnetic nanobeads with it. As the infected blood filters through the device, the MBL from the nanobeads binds to most pathogens or toxins that are around. As the blood then moves out of the device, a magnet grabs the magnetic nanobeads that have attached to the pathogens and removes them from the blood.

The blood can then be put right back into the patient, much cleaner than before.

In their initial experiments, the researchers used rats that had been infected with two common bacteria, Escherichia coli and Staphylococcus aureus. One group of rats was left untreated and the other group had their blood filtered using the new device. After five hours, 89% of the treated rats had survived while only 14% of the untreated rats were still alive.

The researchers also tested whether the device could work for humans, whose average adult blood volume is about five liters. In five hours of testing, moving one liter of infected blood through the device per hour, it removed the vast majority of the infectious bugs.

While five hours is not a long time for patients who are hospitalized, it’s a bit long for patients who might be receiving outpatient treatment for an infection.

It is possible that as the design and function of the device are improved, it could work even faster than one liter per hour. The speed at which the artificial spleen is effective likely depends on several factors, including the pathogen load, the size of the patient (and thus their actual volume of blood) and the number of magnetic nanobeads in the device working to bind the pathogens.
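As a rough illustration of how those factors interact, consider a back-of-the-envelope recirculation model. This is an illustration only, not the Wyss team's analysis: blood is withdrawn at a fixed rate, filtered with some per-pass capture efficiency, and returned to a well-mixed reservoir, so the surviving pathogen fraction decays exponentially with the volume processed.

```python
import math

# Toy recirculation model: remaining = exp(-efficiency * flow * time / volume).
# The well-mixed assumption is deliberately pessimistic -- pathogens returned
# to the reservoir can be drawn through the filter again and again.
def fraction_remaining(efficiency, flow_l_per_h, hours, volume_l):
    return math.exp(-efficiency * flow_l_per_h * hours / volume_l)

# Five liters of blood, one liter per hour for five hours (one full blood
# volume processed), assuming 90% of pathogens captured per pass:
print(round(fraction_remaining(0.90, 1.0, 5.0, 5.0), 3))  # -> 0.407
```

Even under this pessimistic toy assumption, the arithmetic makes the trade-offs visible: higher capture efficiency, faster flow, or a smaller patient (less blood volume) all shorten the effective treatment time.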

Currently, the researchers are extending their experiments by testing the artificial spleen on a larger model animal, the pig.

If the device eventually makes it to market, it might provide a big boost to our arsenal against infectious microorganisms. It can bring the numbers of rapidly dividing bugs down to a level that can then make it easy for drugs or even just the immune system to finish them off, an important advancement for people who suffer from an immunodeficiency for any number of reasons. This device could also help reduce our overuse of antibiotics and give us a strong weapon against antibiotic-resistant bugs.

It might even find use in developing countries, like those in West Africa, where we are currently witnessing the devastation of the Ebola virus outbreak.

However, while many infectious bugs have mannose on their surface, not all of them do. Perhaps the 2.0 version of the artificial spleen will include proteins that bind other molecules on the surface of problematic microorganisms, leading us closer to the all-in-one healing machine imagined in the futuristic world of Elysium.

Image Credit: Wyss Institute/Vimeo

A Blood Test for Depression Moves Closer to Reality



With the recent and highly publicized death of actor Robin Williams, depression is once again making national headlines. And for good reason. Usually, the conversation about depression turns to the search for effective treatments, which currently include cognitive behavioral therapy and drugs such as selective serotonin reuptake inhibitors (SSRIs).

However, an equally important issue is the timely and proper diagnosis of depression.

Currently, depression is diagnosed by a physical and psychological examination, but it mostly depends on self-reporting of subjective symptoms like depressed mood, lack of motivation, and changes in appetite and sleep patterns. Many people who might want to avoid a depression diagnosis for various reasons can fake their way through this self-reporting, making it likely that depression is actually under-diagnosed.

Therefore, an objective test could be an important development in properly diagnosing and treating depression. Scientists at Northwestern University may have developed such a diagnostic tool, one that requires no more than a simple test tube of blood.

A team of researchers, led by Dr. Eva Redei and Dr. David Mohr, found that blood levels of nine biomarkers—molecules found in the body and associated with particular conditions—were significantly different in depressed adults when compared to non-depressed adults.

The new study, published in the current issue of Translational Psychiatry, builds upon earlier work by Dr. Redei, who found the levels of 11 other biomarkers to be different in depressed versus non-depressed teenagers.

While this work is still in the early stages, it represents an important advance in the correct diagnosis and treatment of major depressive disorder, which currently has a lifetime prevalence of about 17% among US adults.

The new test looks at the levels of nine RNA markers in the patient’s blood. In case you forgot your molecular biology, RNA is the messenger molecule made using DNA as a template. In turn, cells then use RNA as a template to make proteins, which are the actual machines that do the work specified by the DNA.
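That DNA-to-RNA step is simple enough to write down. The snippet below transcribes a short, made-up DNA template into messenger RNA by complementing each base; real transcription machinery is of course vastly more involved.

```python
# Transcription in miniature: copy a DNA template strand into messenger RNA
# by pairing complementary bases (A->U, T->A, C->G, G->C).
COMPLEMENT = {"A": "U", "T": "A", "C": "G", "G": "C"}

def transcribe(dna_template):
    # For this sketch we simply complement each base of the given strand.
    return "".join(COMPLEMENT[base] for base in dna_template)

print(transcribe("TACGGT"))  # -> "AUGCCA"
```

It is the abundance of these RNA messages, not the DNA itself, that the blood test measures, since expression levels change while the underlying genes do not.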

The idea is that some genes can be overexpressed or underexpressed during depression and if those genes can be identified by looking at the RNA made from them, an objective way of identifying depression could be developed.

The researchers from Northwestern initially looked at the RNA levels of 26 genes in adolescents and young adults (ages 15-19). They found that 11 of them were significantly different in patients who had previously been diagnosed with depression versus those who never were. They then applied the same approach to adults and found significant differences in nine biomarkers.
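The statistics behind such a screen are, at heart, a two-group comparison. Below is a sketch using Welch's t statistic on invented expression values for one hypothetical marker; the actual study involved many markers, multiple-testing corrections, and far more careful methods.

```python
import statistics

# Made-up expression levels for one candidate RNA marker in two groups.
depressed     = [2.1, 2.4, 2.0, 2.6, 2.3, 2.5]
non_depressed = [1.2, 1.0, 1.4, 1.1, 1.3, 1.2]

def welch_t(a, b):
    # Welch's t statistic: difference in means divided by the combined
    # standard error (unequal variances allowed).
    va, vb = statistics.variance(a), statistics.variance(b)
    se = (va / len(a) + vb / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

t = welch_t(depressed, non_depressed)
print(round(t, 2))  # a large |t| suggests the marker separates the groups
```

Repeat this for every candidate gene and keep the ones whose difference survives correction, and you have the rough shape of the biomarker search described above.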

Interestingly, the biomarkers that differed in depressed versus non-depressed patients were not the same in young people as in adults, implying that the genetic signature of depression differs across age groups.

Furthermore, the test was also able to identify differences between those who went through cognitive behavioral therapy and showed improvement versus those who showed no improvement after therapy. This kind of information can help predict the proper treatment regimen for patients, increasing their chance of going into remission.

While this work is still in the early stages and will take more studies to establish its accuracy and clinical usefulness, it represents an exciting development in diagnostics.

As medicine and genomics continue to change with the development of increasingly powerful and cost-effective technology, we expect to get better at identifying and treating diseases before they take a significant toll.

These new blood tests have the potential to change our approach to diagnosing depression as well as selecting the proper treatment.

Sadly, many depression patients are resistant to the most common treatments, so the need for new, more effective treatments is great. Finding differences in the expression of certain genes during depression could even lead to a clearer understanding of the disease, which in turn could lead to improved treatments down the road.

Depression is a thief that steals a productive and enjoyable life from far too many people around the world. Hopefully, this line of research will one day make it a thing of the past like smallpox.


Navy’s Boat Drones Pack Hunt Like Wolves on Water



The US military is building a droid (er, drone) army. You’ve likely heard of flying drones—but the robot arms race won’t end there. The Navy recently demonstrated a pack of autonomous boats performing defensive and offensive swarm tactics.

Why robots? The military hopes to augment its human forces with superhuman robotic abilities, while at the same time putting fewer soldiers in harm’s way. The Navy’s autonomous boat program was motivated, in part, by the 2000 terrorist attack on the USS Cole, in which a boat brimming with explosives rammed the destroyer’s hull.

In the future, armed autonomous boats might approach such threats, take fire without risking human lives, and if necessary, neutralize or destroy the adversary.

The test operations—which took place in August but were only recently revealed by the Office of Naval Research (ONR)—involved up to 13 standard rigid-hulled inflatable boats using automation technology originally developed for NASA’s Mars rovers.

The exercise was split into defensive and offensive systems tests conducted on the James River in Virginia. The fleet of robot boats first acted as escort to a larger ship before spotting a designated “enemy” vessel across the river. Eight of the ships then peeled off the main formation and surrounded the enemy ship.


Standard-issue Navy boats equipped with the AI system can pilot themselves.

We’ve long been able to remotely pilot unmanned boats and flying machines, but this latest generation incorporates more autonomy. The Navy’s swarmboats run smart software called Control Architecture for Robotic Agent Command and Sensing (CARACaS). The system handles individual vessels and coordinates the larger group.

The boats use radar to calculate their individual paths through the water, avoiding obstacles and maintaining position relative to the other boats. Meanwhile, each individual shares its radar with the group to construct a picture of the whole.
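The "share locally, see globally" idea is easy to sketch. The snippet below fuses each boat's radar contacts (made-up grid coordinates) into one common picture; CARACaS's real sensor fusion and path planning are far more sophisticated, so treat this purely as an illustration of the pattern.

```python
# Each boat contributes its local radar contacts as rough grid coordinates.
local_contacts = {
    "boat_1": {(4, 7), (9, 2)},    # contacts only boat 1 can see, plus overlap
    "boat_2": {(9, 2), (12, 5)},   # shares one contact with boat 1
    "boat_3": {(1, 1)},
}

def fused_picture(contacts_by_boat):
    # The swarm's common picture is the union of every boat's detections;
    # duplicate sightings of the same contact collapse automatically.
    picture = set()
    for contacts in contacts_by_boat.values():
        picture |= contacts
    return picture

print(sorted(fused_picture(local_contacts)))
# Every boat now "sees" all four distinct contacts, not just its own.
```

The whole-greater-than-parts effect falls out directly: no single boat observes more than two contacts, but each acts on a picture containing four.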

Perhaps the most surprising part of the technology, however, is its cost.

The boats themselves are standard issue—instead of cutting-edge, specially tailored marvels that take a decade to develop—and outfitting them with the necessary artificial intelligence and communications equipment costs only $2,000.

Upgrading the current fleet, the Navy says, would cost thousands, not millions. And because the boats are relatively cheap and pilotless, the Navy can overwhelm targets with numbers and liberally employ kamikaze tactics, where some individuals sacrifice themselves to ensure the success of the larger strategy.


Are robot swarms of the sea like this one the future of naval warfare?

Further, a team instantly and electronically sharing and acting on information from a variety of viewpoints can accomplish things none of its individuals could accomplish acting independently—the whole is greater than the sum of its parts.

Flying drones, while more expensive, are similarly being developed with more autonomy in mind. While early drones were piloted remotely, the Navy’s X-47B can fly itself and even autonomously land on an aircraft carrier. The Navy expects advanced drone fighter jets might also make use of swarm tactics in the air.

Along with autonomous war machines comes the question of ethics: Just how autonomous are we talking here? The Navy says their autonomous boats will be able to locate and engage adversaries—but a human will always pull the trigger.

This is in line with a position the Pentagon outlined in a 2013 policy directive.

In the future, then, perhaps warriors removed from the immediate threat to life and limb will be able to more coolly survey situations. Reducing heightened levels of fear and adrenaline could potentially reduce civilian and “friendly fire” casualties.

Alternatively, distance might make combat too game-like and encourage unethical decisions. And no robot or AI system is perfect, nor is wireless communication completely reliable—especially on the battlefield. Further, military robots won’t be used by the US alone, but by other nations with other guidelines too. A broad discussion and agreement on the ethical use of the machines is needed (and indeed, is underway).

In the meantime, expect to see increasingly autonomous robots making their way toward the battlefield. The Navy says their robotic swarmboats could be rolled out in a year for escort missions in dangerous areas. They hope to first improve the boats’ navigation and sensing abilities, but believe they’ll eventually be widely used.

Importantly, those uses won’t be solely military. Pilotless cargo ships, for example, may soon cross the seas with skeleton crews. Driverless trucks will do the same on land. Drones may deliver Amazon goodies the same day or medicine to the rural poor.

Such technology will likely have more peaceful applications than violent ones.

Image Credit: Office of Naval Research/YouTube

Unlocking Big Data: Lessons Learned From The God Particle


It’s a puzzle wrapped in an enigma wrapped in a symphony. It’s the Higgs boson, the so-called God particle, the greatest physics find of the 21st century, turned into music.

Chamber music, to be exact.

Admittedly—and especially for fans of Pythagoras—this conversion is a little mind-blowing. But once you get beyond the cosmic significance, what’s equally interesting is that the resulting symphony—aptly titled “LHC Chamber Music” (with LHC being short for Large Hadron Collider, the particle accelerator that helped us find the Higgs)—gives us a window into the future of data visualization and creative innovation.

But first, the music.

To commemorate the 60th anniversary of CERN—the Swiss institute where the LHC is housed—scientists converted Higgs measurement data into two pieces of music—a piano composition and a full chamber orchestra symphony. The conversion, known as a “sonification,” involves assigning notes to numbers, with the numbers representing “particle collision events per unit of mass.”

An example of simulated data modelled for the CMS particle detector on the Large Hadron Collider (LHC) at CERN. Here, following a collision of two protons, a Higgs boson is produced which decays into two jets of hadrons and two electrons. The lines represent the possible paths of particles produced by the proton-proton collision in the detector, while the energy these particles deposit is shown in blue.

In other words, every time the calculations spit out the number 25, that bit of data is converted to a middle C. Then 26 becomes a D, 27 an E, and so on.

This whole process works—meaning it produces something that sounds like music—because “harmonies in natural phenomena,” as the LHC Open Symphony blog recently pointed out, “are related to harmonies in music.”
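The mapping itself can be sketched in a few lines of code. The toy version below is purely illustrative: it assumes a simple ascending C-major scale anchored at middle C (written here as C4), whereas the actual LHC Open Symphony sonification used more elaborate rules.

```python
# Toy sonification: map integer data values to note names by walking up
# a C-major scale anchored at middle C (C4). Purely illustrative; the
# real LHC Open Symphony mapping was more elaborate than this.
C_MAJOR = ["C4", "D4", "E4", "F4", "G4", "A4", "B4",
           "C5", "D5", "E5", "F5", "G5", "A5", "B5"]

def sonify(values, anchor=25):
    """Convert 'collision events per unit of mass' counts into notes."""
    return [C_MAJOR[(v - anchor) % len(C_MAJOR)] for v in values]

print(sonify([25, 26, 27, 30]))  # ['C4', 'D4', 'E4', 'A4']
```

Because the assignment is deterministic, repeated values in the data become repeated notes, which is what lets structure in the numbers surface as structure in the melody.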

At a macroscopic level, the purpose of the sonification was to give non-science types an intuitive sense of the vast complexity of the Higgs boson and, as physicist and the music’s composer Domenico Vicinanza said, “[to] be a metaphor for scientific collaboration; to demonstrate the vast and incredible effort these projects represent—often between hundreds of people across many different continents.”

In other words, the Higgs sonification is also a data visualization technique (in this case, data acoustification), meaning it gives us a different way to interact with huge amounts of information, a different way to try and detect novel patterns.

Why is this a big deal? Big data is the deal. As we all know, the modern world is awash in data. And while we’re starting to get better at utilizing this information, there’s still a very long way to go.

The problem is not pattern recognition. It turns out we humans are great at pattern recognition (which is why, for example, projects like Foldit are so successful). Our trouble starts with holding giant data sets in our heads—something we’re simply not built for (and why computers beat humans at chess: better access to giant data sets allows for brute-force solutions).

Put differently, right now, the biggest hurdle to big data is that there is no user-friendly interface for big data. No way in for the common person.

Think about the ARPANET, the precursor to the Internet. Made operational in 1975, ARPANET was mostly text-based, complicated to navigate, and used mainly by scientists. All of this changed in 1993, when Marc Andreessen coauthored Mosaic—both the very first web browser and the Internet’s first user-friendly interface. Mosaic unlocked the Internet. By adding in graphics and porting the software from Unix to Windows—the operating system then running nearly 80 percent of the computers in the world—Andreessen mainstreamed a technology developed for scientists, engineers, and the military. As a result, a worldwide grand total of twenty-six websites in early 1993 mushroomed into more than 10,000 sites by August 1995, then exploded into several million by the end of 1998.

Today, no similar interface exists for big data. Ask a data scientist the best way to take advantage of the big data revolution and the most frequent answer is “hire a data scientist.” (This is from personal experience: I’ve been asking the question for over a year now while researching my next book.)

If we all have to become data scientists to take advantage of big data, well, that strikes me as a fairly inefficient way forward.

But sonification is one way to represent big data sets in a form humans can comprehend. It’s a kind of user-friendly interface. As a result, one of the possibilities raised by the release of the Higgs symphony is that some listener might detect a novel pattern in the music—something the physicists involved have not noticed, something in the melody that hints at deeper structure in the universe. Given the strength of the human pattern recognition system, this is not an impossibility.

To come at this from a different angle, I know of a number of teams working on novel ways to represent the stock market. One team is trying to render the market as natural terrain, like snow-covered mountains. Why? Instead of turning on the computer to check how your stocks are performing, you could don virtual reality goggles and ski the stock market.

The idea is that bringing multiple sensory streams to the task of processing stock market data might (a) help us assimilate the data more quickly and (b) unlock hidden patterns in it.

And this is nowhere near as weird as it sounds. Our subconscious is capable of astounding pattern detection. But the visual perception system is only one of myriad possible inputs to an information processing system. Consider that fifty percent of your nerve endings are in your hands, feet, and face. Each of those nerve endings represents data processing power. Right now, we’re only using visual information (numbers read off a screen) to analyze the stock market; engaging more senses means unlocking more processing power—and, quite possibly, better analysis.

And better data analysis leads, obviously, to better innovation.

[Image credit: Wikipedia, fractal image courtesy of Shutterstock]

How Long Would It Take to Mine Bitcoin by Hand?

Singularity Hub
How Long Would It Take to Mine Bitcoin by Hand?


Bitcoin is a decentralized digital currency invented by a mysterious individual known by the handle Satoshi Nakamoto. It’s volatile but currently worth about $380 per coin; regulators are increasingly interested; retailers too—and true believers believe.

These headlines you’ve likely read. But where the hell do bitcoins come from anyway? They’re mined by computers making calculations lightning fast—or, in this case, by a man with sixteen minutes’ free time, a pencil, and a pad of paper.

Ken Shirriff is the hero of this story. (For more detail check out his blog post.)

His hand calculation of the bitcoin algorithm (SHA-256) is instructive in a few ways. First, it’s clear why bitcoin is called a cryptocurrency—it’s built on a series of cryptographic operations. Second, you can see there’s no shortcut around the algorithm: it takes a fixed amount of time, or processing power, to complete.
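The core operation is easy to demonstrate in code. Bitcoin mining applies SHA-256 twice to a block header, varying a nonce until the resulting hash falls below a difficulty target. The sketch below uses a made-up all-zero header and an artificially easy target, purely for illustration; real headers encode version, previous-block hash, Merkle root, timestamp, and difficulty bits.

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """Bitcoin's hash function: SHA-256 applied twice."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# Fake 76-byte header prefix (assumption for illustration); the final
# 4 bytes are the nonce that miners vary on each attempt.
header = b"\x00" * 76
target = 2**248  # artificially easy target; Bitcoin's real target is far lower

for nonce in range(1_000_000):
    digest = double_sha256(header + nonce.to_bytes(4, "little"))
    # Interpret the digest as a little-endian integer and compare to the target.
    if int.from_bytes(digest, "little") < target:
        print(f"nonce {nonce} produces a winning hash: {digest.hex()}")
        break
```

With this easy target a winning nonce turns up after a few hundred tries; at Bitcoin’s real difficulty the same loop would run quintillions of times, which is the fixed work the algorithm imposes.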


The specialized bitcoin mining chip, Bitfury.

But really, you don’t have to care about the details of bitcoin, or even digital currency in general. This is a great bare-bones glimpse into the operations computers actually perform. They make the same calculations humans can—only unfathomably faster.

The rules are fairly simple (once you learn them) but the actual process is laborious. It took Shirriff 16 minutes, 45 seconds to complete a single round of the algorithm. A full bitcoin block (128 rounds) would take him about a day and a half.

“In comparison, current Bitcoin mining hardware does several terahashes per second, about a quintillion times faster than my manual hashing,” Shirriff estimates. “Needless to say, manual Bitcoin mining is not at all practical.”
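Shirriff’s comparison is easy to check with a little arithmetic, assuming one hand-computed SHA-256 round takes 16 minutes 45 seconds, a full Bitcoin hash takes 128 rounds, and hardware runs at one terahash per second (a conservative reading of his “several”):

```python
# Back-of-the-envelope check of Shirriff's speed comparison.
round_seconds = 16 * 60 + 45        # 1,005 seconds per hand-computed round
hash_seconds = 128 * round_seconds  # 128,640 s per full hash: about 1.5 days
manual_rate = 1 / hash_seconds      # hashes per second by hand
hardware_rate = 1e12                # one terahash per second

print(f"{hardware_rate / manual_rate:.2e}")  # prints 1.29e+17
```

At one terahash per second the ratio is about 10^17; at the several terahashes Shirriff cites, it approaches his quintillion (10^18) estimate.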

According to Shirriff, the bitcoin algorithm is, in fact, one of the simpler ones. Litecoin, Dogecoin, and other cryptocurrencies (of which there is a growing list) use an algorithm, scrypt, that is more difficult to compute—their mining hardware is thousands of times slower.

Moral of the story? You’d have more luck mining unobtanium with your bare hands than cryptocurrency with a pencil and paper.

Image Credit: Zeptobars

Elon Musk Is Right: Colonizing the Solar System Is Humankind’s Insurance Policy Against Extinction

Singularity Hub
Elon Musk Is Right: Colonizing the Solar System Is Humankind’s Insurance Policy Against Extinction


Why blow billions of dollars on space exploration when billions of people are living in poverty here on Earth?

You’ve likely heard the justifications. The space program brings us useful innovations and inventions. Space exploration delivers perspective, inspiration, and understanding. Because it’s the final frontier. Because it’s there.

What you haven’t heard is anything to inspire a sense of urgency. Indeed, NASA’s struggle to defend its existence and funding testifies to how weak these justifications sound to a public that cares less about space than seemingly more pressing needs.

Presumably, this is why SpaceX founder Elon Musk, in a fascinating interview with Ross Andersen, skipped all the usual arguments in favor of something else entirely. Space exploration, he says, is as urgent as easing poverty or disease—it’s our insurance policy against extinction.

As we extend our gaze back through geologic time and out into the universe, it’s clear we aren’t exempt from nature’s carelessly terrifying violence. We simply haven’t experienced its full wrath yet because we’ve only been awake for the cosmological blink of an eye.

Musk says an extinction-level event would, in an existential flash, make our down-to-earth struggles irrelevant. “Good news, the problems of poverty and disease have been solved,” he says, “but the bad news is there aren’t any humans left.”

We’ve got all our eggs in one basket, and that’s a terrible risk-management strategy. We should diversify our planetary portfolio to insure against the worst—and soon.

Musk’s line of reasoning isn’t completely novel. It’s what led science fiction writer Larry Niven to say, “The dinosaurs became extinct because they didn’t have a space program.” And it drives Ed Lu’s quest to save humanity from a major asteroid hit.

But while we may spot and potentially derail asteroids, not every cosmic threat can be so easily predicted or prevented—a blast from a nearby supernova; a gamma ray burst aimed at Earth; a period of extreme volcanism. Any of these could wipe us out.

Musk says he thinks a lot about the silence we’ve been greeted with as our telescopes scan the sky for interstellar broadcasts from other civilizations.

Given the sheer number of galaxies, stars, and planets in the universe—it should be teeming with life. If even a tiny percent of the whole is intelligent, there should be thousands of civilizations in our galaxy alone. So where are they?

This is known as the Fermi Paradox, and Musk rattles off a few explanatory theories (there are many). But he settles on this: “If you look at our current technology level, something strange has to happen to civilizations, and I mean strange in a bad way. It could be that there are a whole lot of dead, one-planet civilizations.”

That something strange might be an evolutionary self-destruct button, as Carl Sagan theorized. We developed modern rockets at the same time as nuclear weapons.

But the Fermi Paradox and its explanations, while philosophically captivating, haven’t settled the question of intelligent life. SETI’s Seth Shostak cautions, “The Fermi Paradox is a big extrapolation from a very local observation.” That is, just because we don’t see compelling evidence of galactic colonization around here doesn’t mean there is none.

But even without the Fermi Paradox, our planet’s geologic record is enough to show that, as Sagan phrased it, “Extinction is the rule. Survival is the exception.”

So, if you buy Musk’s argument—what next? Well, he didn’t start SpaceX to launch telecommunications satellites or shuttle astronauts to low-Earth orbit. SpaceX is Musk’s vehicle to another planet, and he isn’t shy about saying so.

Long after SpaceX sends its first human passengers to the space station; after it’s perfected reusable rockets; after it fires up the first Falcon Heavy deep space rocket—after all that, perhaps in the mid-2030s, Musk will found a colony on Mars.

Some colonists will be able to afford the $500,000 ticket, he says. Others will sell their earthly belongings—like the early American settlers—to book their trip. But it won’t be a pleasure cruise. No, we’re talking an all-in, one-way commitment to a cause.

Even so, getting people to go won’t be a problem. Mars One, an organization similarly dedicated to sending the first humans to Mars, had over 200,000 people apply for a few one-way tickets. Mars One may or may not make it to the Red Planet—but at the least they proved there are people willing to sacrifice the easy life to get there.

In the long run, however, to establish a permanent, sustainable presence on Mars, we’ll need a whole lot more than a scattering of rugged colonists.

Musk thinks it’ll take at least a million people to form a genetically diverse population and self-sufficient manufacturing base. All that in a freezing desert wasteland with no oil, oxygen, or trees. Mars has water but it’s not readily available. We’d have to mine the surface and set up heavy industry. It would be a mammoth undertaking.

Musk thinks it could happen in the next century. And perhaps he’s right. Perhaps not.

As Andersen notes, although Musk is on an “epic run…he is always giving you reasons to doubt him.” Monumental goals, with dates attached. A century is a long time, but SpaceX colonizing Mars may be a bridge too far, and some doubt our abilities in the near future.

Astrophysicist and Astronomer Royal, Martin Rees, has said, “I think it’s very important not to kid ourselves that we can solve Earth’s problems by mass emigration into space. There’s nowhere in our solar system even as clement as the top of Everest or the South Pole—so it’s only going to be a place for pioneers on cut-price private ventures and accepting higher risks than a western state could impose on civilians.”

In other words, maybe some people will venture beyond the Earth and Moon. Even live out subsistence-level lives on other planetary bodies. But a civilization growing out of Musk’s million isn’t likely. At least not until we can engineer on grander scales—terraform Mars, hollow out asteroids, build rolling bubble cities on Mercury.

In either case, Musk is right about one thing. It’s time we pushed the boundaries of space exploration. And whatever your opinion, you have to admire the man’s willingness to go out on a limb when no one else will—and invite the rest of us to join him there.

Image Credit: Shutterstock.com; SpaceX; NASA/Wikimedia Commons