July 2014

The Uncanniest Valley: What Happens When Robots Know Us Better Than We Know Ourselves?

The “uncanny valley” is a term coined by Japanese roboticist Masahiro Mori in 1970 to describe the strange fact that, as robots become more human-like, we relate to them better—but only to a point. The “uncanny valley” is the dip beyond that point.

The issue is that, as robots start to approach true human mimicry, when they look and move almost, but not exactly, like a real human, real humans react with a deep and violent sense of revulsion.

This is evolution at work. Biologically, revulsion is a subset of disgust, one of our most fundamental emotions and the by-product of evolution’s early need to prevent an organism from eating foods that could harm it. Since survival is at stake, disgust functions less like a normal emotion and more like a phobia—a nearly unshakable, hard-wired reaction.

Psychologist Paul Ekman discovered that disgust, alongside anger, surprise, fear, joy, and sadness, is one of the six universally recognized emotions. But the depth of this emotion (meaning its incredibly long and critically important evolutionary history) is why Ekman also discovered that in marriages, once one partner starts feeling disgust for the other, the result is almost always divorce.

Why? Because once disgust shows up, the brain of the disgust-feeler starts processing the other person (i.e., the disgust trigger) as a toxin. Not only does this bring on an unshakable sense of revulsion (a get-me-the-hell-away-from-this-toxic-thing response), it dehumanizes the other person, making it much harder for the disgust-feeler to feel empathy. Both spell doom for relationships.

Now, disgust comes in three flavors. Pathogenic disgust refers to what happens when we encounter infectious microorganisms; moral disgust pertains to social transgressions like lying, cheating, stealing, raping, killing; and sexual disgust emerges from our desire to avoid procreating with “biologically costly mates.” And it is sexual and pathogenic disgust together that create the uncanny valley.

To protect us from biologically costly mates, the brain’s pattern recognition system has a hair-trigger mechanism for recognizing signs of low fertility and ill health. Something that acts almost human, but not quite, reads—to our brain’s pattern recognition system—as illness.

And this is exactly what goes wrong with robots. When the brain detects human-like features—that is, when we recognize a member of our own species—we tend to pay more attention. But when those features don’t exactly add up to human, we read this as a sign of disease—meaning the close-but-no-cigar robot reads as both a costly mate and a toxic substance, and our reaction is deep disgust.

Repliee Q2, photographed at Index Osaka. (The model for Repliee Q2 is probably the same as for Repliee Q1expo: Ayako Fujii, an NHK announcer.)

But the uncanny valley is only the first step in what will soon be a much more peculiar process, one that will fundamentally reshape our consciousness. To explore this process, I want to introduce a downstream extension of this principle—call it the uncanniest valley.

The idea here is complicated, but it starts with the very simple fact that every species knows (and I’m using this word to describe both cognitive awareness and genetic awareness) its own species the best. This knowledge base is what philosopher Thomas Nagel explored in his classic paper on consciousness, “What Is It Like to Be a Bat?” In this essay, Nagel argues that you can’t ever really understand the consciousness of another species (that is, what it’s like to be a bat) because each species’ perceptual systems are hyper-tuned and hyper-sensitive to its own sensory inputs and experiences. In other words, in the same way that “game recognizes game” (to borrow a phrase from LL Cool J), species recognize species.

And this brings us to Ellie, the world’s first robo-shrink. Funded by DARPA and developed by researchers at USC’s Institute for Creative Technologies, Ellie is an early-iteration computer-simulated psychologist: a bit of complicated software designed to identify signals of depression and other mental health problems through an assortment of real-time sensors. (She was developed to help treat PTSD in soldiers and, hopefully, decrease the incredibly high rate of military suicides.)

At a technological level, Ellie combines a video camera to track facial expressions, a Microsoft Kinect movement sensor to track gestures and jerks, and a microphone to capture inflection and tone. At a psychological level, Ellie evolved from the suspicion that our twitches and twerks and tones reveal much more about our inner state than our words (thus Ellie tracks 60 different “features”—everything from voice pitch to eye gaze to head tilt). As USC psychologist Albert Rizzo, one of the leads on the project, told NPR: “[P]eople are in a constant state of impression management. They’ve got their true self and the self that they want to project to the world. And we know that the body displays things that sometimes people try to keep contained.”
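
To make this concrete, here’s a minimal sketch of how a multimodal system in the spirit of Ellie might fuse those channels into one signal. The feature names, weights, and scoring rule are hypothetical illustrations, not the actual USC/DARPA implementation:

```python
# Illustrative sketch only: a toy multimodal fusion in the spirit of Ellie.
# The features, weights, and scoring rule are hypothetical, not the actual
# USC/DARPA implementation.
from dataclasses import dataclass

@dataclass
class Observation:
    voice_pitch_variance: float     # flattened prosody can signal distress
    gaze_down_ratio: float          # fraction of frames spent gazing down
    head_movement_stddev: float     # reduced movement variability
    smiles_per_minute: float        # smile frequency from facial tracking

def distress_score(obs: Observation) -> float:
    """Combine per-channel cues into a single 0..1 estimate (made-up weights)."""
    score = 0.0
    score += 0.3 * (1.0 - min(obs.voice_pitch_variance, 1.0))
    score += 0.3 * min(obs.gaze_down_ratio, 1.0)
    score += 0.2 * (1.0 - min(obs.head_movement_stddev, 1.0))
    score += 0.2 * (1.0 - min(obs.smiles_per_minute / 2.0, 1.0))
    return score

print(f"distress score: {distress_score(Observation(0.2, 0.7, 0.1, 0.3)):.2f}")
```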

More recently, a new study found that patients are much more willing to open up to a robot shrink than a human shrink. Here’s how Neuroscience News explained it: “The mere belief that participants were interacting with only a computer made them more open and honest, researchers found, even when the virtual human asked personal questions such as, ‘What’s something you feel guilty about?’ or ‘Tell me about an event, or something that you wish you could erase from your memory.’ In addition, video analysis of the study subjects’ facial expressions showed that they were also more likely to show more intense signs of sadness—perhaps the most vulnerable of expressions—when they thought only pixels were present.”

The reason for this success is pretty straightforward. Robots don’t judge. Humans do.

But this development also tells us a few things about our near future. First, while most people are now aware that robots are going to steal a ton of jobs in the next 20 years, the jobs most people think are vulnerable are of the blue-collar variety. Ellie is one reason to disabuse yourself of this notion.

As a result of this coming replacement, two major issues are soon to arise. The first is economic. There are about 607,000 social workers in America, 93,000 practicing psychologists, and roughly 50,000 psychiatrists. But, well, with Ellie 2.0 in the pipeline, not for long. (It’s also worth noting that these professions generate about $3.5 billion in annual income, which—assuming robo-therapy is much, much cheaper than human therapy—will also vanish from the economy.)

But the second issue is philosophical, and this is where the uncanniest valley comes back into the picture. Now, for sure, this particular valley is still hypothetical, and thus based on a few assumptions. So let’s drill down a bit.

The first assumption is that social workers, psychologists, and psychiatrists constitute a deep knowledge base, arguably one of our greatest repositories of “about human” information.

Second, we can also assume that Ellie is going to get better and better and better over time—no great stretch since we know all the technologies that combine to make robo-psychologists possible are, as was well-documented in Abundance, accelerating on exponential growth curves. This means that sooner or later, in the psychological version of the Tricorder, we’re going to have an AI that knows us as well as we know ourselves.

Third—and also as a result of this technological acceleration—we can also assume there will soon come a time when an AI can train up a robo-therapist better than a human can—again, no great stretch because all we’re really talking about is access to a huge database of psychological data combined with ultra-accurate pattern recognition, two already possible developments.

But here’s the thing—when you add this up, what you start to realize is that sooner or later robots will know us better than we know ourselves. In Nagel’s terms, we will no longer be the species that understands our species the best. This is the Uncanniest Valley.

And just as the uncanny valley produces disgust, I’m betting that the uncanniest valley produces a nearly unstoppable fear reaction—a brand new kind of mortal terror, the downstream result of what happens when self loses its evolutionarily unparalleled understanding of self.

Perhaps this will be temporary. It’s not hard to imagine that our journey to this valley will turn out to be fortunate. For certain, the better we know ourselves—and it doesn’t really matter where that knowledge comes from—the better we can care for and optimize ourselves.

Yet I think the fear-response produced by this uncanniest valley will have a similar effect to disgust in relationships—that is, this fear will be extremely hard to shake.

But even if I’m wrong, one thing is for certain: we’re heading toward an inflection point almost without equal—the point in time when we lose a lot more of ourselves, literally, to technology—and another reason that life in the 21st century is about to get a lot more Blade Runner.

More human than human? You betcha. Stay tuned.

[Photo credits: Robert Couse-Baker/Flickr, Wikipedia, Steve Jurvetson/Flickr]

Strange Bacteria Dine on Electricity and Link Up to Form Biowires

All living organisms need energy. Most animals get their energy by eating other organisms. Plants manufacture energy from sunlight. Now, scientists are finding a strange form of bacterial life that dines on unadulterated electricity.

But the fact the bacteria live on electricity isn’t the weird part.

We all fundamentally live on electricity. But whereas human metabolism is a complex dance shuttling electrons between sugar and oxygen, the bacteria cut to the chase, eating and excreting electrons directly. In their research, USC scientist Kenneth Nealson and his PhD student Annette Rowe have found eight types of electric bacteria.

“This is huge,” says Nealson. “What it means is that there’s a whole part of the microbial world that we don’t know about.”

To study the bacteria, researchers scoop up seafloor sediment, insert a pair of electrodes, and establish a voltage that differs from the natural voltage in the sediment. When present, the creatures form a current between the electrodes.
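
The detection logic is simple in principle. Here’s a toy sketch, with invented readings and an invented threshold (real instrumentation differs), of flagging a sustained current between the electrodes:

```python
# Toy sketch of the detection step: apply a voltage offset, then flag a
# sustained current between the electrodes. Readings and threshold are
# invented for illustration; real instruments differ.
current_samples_nA = [0.01, 0.02, 4.8, 5.1, 4.9, 0.03]  # hypothetical nanoamps
DETECTION_THRESHOLD_NA = 1.0

def bacteria_detected(samples, threshold=DETECTION_THRESHOLD_NA, run=3):
    """Return True if `run` consecutive samples exceed the threshold."""
    streak = 0
    for reading in samples:
        streak = streak + 1 if reading > threshold else 0
        if streak >= run:
            return True
    return False

print(bacteria_detected(current_samples_nA))  # True: indices 2-4 exceed 1.0 nA
```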

The bacteria are not only able to consume electricity, they can also pass it along to each other by forming wires in the sediment. One bacterium reaches out to the next to make a chain along which an electric current travels.

While the bacteria don’t require energy-rich nutrients (like sugars) or other organic carbon molecules to grow, they do need trace elements like phosphorous, sulfur, and nitrogen—this is akin to photosynthetic organisms, substituting electricity for light.

Nealson told us, “In the same way that photosynthetic bacteria or algae need only sunlight—they use the energy of the photons to reduce carbon dioxide to sugars, and go from there—our bacteria use the energy of electrons from the electrode to power the reduction of CO2 to sugar.”

The existence of such creatures is fascinating in its own right. It’s yet another example of life getting creative, learning to live in all kinds of environments. But the bacteria may also prove useful in an engineering sense.

It’s thought bacteria might be used to clean up oil spills or other toxic messes without need for an external source of energy. And perhaps much further down the line, such creatures may inspire biomimetic designs for machines built on the nanoscale.

In The Singularity Is Near, Ray Kurzweil notes: “Bacteria, which are natural nanobot-size objects, are able to move, swim, and pump liquids.” Bacteria may serve as blueprints for nanobots manipulating nanoscale systems, or, as Kurzweil later points out, they might be used in fuel cells to produce electricity from sugar.

Electric bacteria might be used to form networks of biowires or show how nanomachines can operate without an onboard power source—that is, like the bacteria themselves, the nanobots might draw power from their immediate environment.

Image Credit: New Scientist/YouTube

Just Reread a Sentence Ten Times? Gaze Tracking Software Tells You, Highlights What You Missed

How much does your mind wander when it’s supposed to be focused on an important task? You probably don’t need a scientific study to tell you the answer. But just in case, it’s a lot—some 20 to 40 percent of the time. Fear not. Researchers are working on a program that notices when you zone out and gets you to refocus.

Eye- and face-tracking systems are already finding their way into commercial applications in retail and advertising. But in this case, instead of measuring which part of the page interests consumers or how they react to a video, the software looks for signs of disinterest—signs that may indicate you’ve lost focus.

To do this, the software gathers clues from the general pattern of eye movement, such as how quickly (or slowly) the eyes move from one word to the next. If the data indicate a lapse in concentration, the program alerts the user and highlights problem sections in the text. Eventually, it might even offer an alternative strategy to learn the content—say, a quick video tutorial or infographic.
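
Here’s a hedged sketch of how such a heuristic might work; it is not the researchers’ actual algorithm. Unusually long fixations and repeated regressions (eyes jumping back to earlier words) flag probable lapses:

```python
# Hedged sketch, not the researchers' algorithm: flag words where the reader
# lingered too long or kept regressing (re-reading earlier words).
def flag_lapses(fixations, long_ms=600, window=5, max_regressions=3):
    """fixations: list of (word_index, duration_ms). Returns flagged words."""
    flagged = set()
    regressions = []   # fixation indices where the eyes jumped backward
    prev_word = -1
    for i, (word, dur) in enumerate(fixations):
        if dur > long_ms:
            flagged.add(word)                  # lingering: possible zone-out
        if word < prev_word:
            regressions.append(i)
        regressions = [j for j in regressions if i - j < window]
        if len(regressions) >= max_regressions:
            flagged.add(word)                  # stuck re-reading this passage
        prev_word = word
    return sorted(flagged)

# Hypothetical trace: the reader stalls around word 12.
trace = [(10, 220), (11, 240), (12, 900), (11, 250), (12, 260),
         (11, 230), (12, 710), (13, 240)]
print(flag_lapses(trace))  # [12]
```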

Might our cars monitor attention levels and warn of a dangerous loss of focus?

We all learn a little differently, and of course, attention spans vary widely. It’s not uncommon to diagnose students who fall outside educational norms with various attention disorders and prescribe drugs to improve focus and help them through school. Digitally driven adaptive learning may prove a useful companion.

And there are other digital options for increasing concentration.

Focus@will, for example, is a web application that tailors music and ambient sounds to the brain’s regular cycles of concentration (I’m using it right now). The app manipulates variables including musical key, intensity, arrangement, speed, emotional values, and recording style (among others) to optimize the brain’s focus throughout study.

Of course, as much as we’d all like to focus more, occasionally zoning out serves a purpose too. After filling our brain with new information, it sometimes needs a break to creatively synthesize it all.

Such a break might take the form of a long walk or some other “mindless” activity. Sometimes it’s in these unguarded moments—in the shower, on the subway, in the woods, or maybe even zoning out in the middle of a paper or book—that breakthroughs seem to magically resolve themselves in our mind’s eye.

Beyond improving focus, however, such systems might also be used to gather data about our reading habits in general, kind of like FitBit for how we’re consuming information. We might track our average reading speed or how often our attention wanders and use the data to inform new behaviors and measure progress. Or content creators might spot and correct problematic text.

And of course, tracking concentration isn’t only useful for learning new information. Lapses in focus can have catastrophic consequences in other activities like driving or piloting an aircraft. In the future, systems tracking eye movements might alert drivers to pull to the side of the road or pilots to hand over the controls.

Increasingly, the sensors and cameras on our devices aren’t just passively capturing the scenes in front of them—informed by intelligent software, they’re drawing conclusions from what they see and may help us become safer and more effective.

Image Credit: star5112/Flickr; Robert Couse-Baker/Flickr

Software Bot Produces Up To 10,000 Wikipedia Entries Per Day

While Internet trolls and members of Congress wage war over edits on Wikipedia, Swedish university administrator Sverker Johansson has spent the last seven years becoming the site’s most prolific author…by a long shot. In fact, he’s responsible for over 2.7 million articles, or 8.5% of all the articles in the collection, according to The Wall Street Journal.

And it’s all thanks to a program called Lsjbot.

Johansson’s software collects info from databases on a particular topic then packages it into articles in Swedish and two dialects of Filipino (his wife’s native tongue). Many of the posts focus on innocuous subjects — animal species or town profiles. Yet, the sheer volume of up to 10,000 entries a day has vaulted Johansson and his bot into the top leaderboard position and hence, the spotlight.
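
Lsjbot’s code isn’t published, so far as we know, but template-based stub generation is conceptually simple. Here’s a toy illustration with a hypothetical database record:

```python
# Not Lsjbot's actual code (unpublished, so far as we know), just a toy
# illustration of template-based stub generation from structured data.
species_record = {            # hypothetical database row
    "name": "Pelophylax lessonae",
    "common": "pool frog",
    "family": "Ranidae",
    "described_by": "Camerano",
    "year": 1882,
}

STUB_TEMPLATE = (
    "{name} is a species of frog in the family {family}. "
    "Commonly known as the {common}, it was described by "
    "{described_by} in {year}."
)

def make_stub(record: dict) -> str:
    return STUB_TEMPLATE.format(**record)

print(make_stub(species_record))
```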

The bot’s automatically generated entries are not the beautifully constructed entries one would find within the pages of the Encyclopedia Britannica, for example. Many posts are simply stubs—short fragments of posts that require editing and/or additional information—because the bot is dependent on what’s readily available on the web. This being Wikipedia, though, nothing stops someone from refining the stubs and editing them into the beautiful prose that would make any human proud.

Whether Wikipedia purists approve of Lsjbot or not, data-scraping software that can mass-produce articles is on the rise.

Just last month, the Associated Press announced that it would be using software called Wordsmith, created by startup Automated Insights, to produce stories on the quarterly corporate earnings from US companies. Since October of 2011, Narrative Science has been automatically generating sports and finance stories on Forbes without much fanfare.

It isn’t just companies getting into the automated content game. Recently, an LA journalist used a bot to post a report just three minutes after an earthquake, and academic Philip Parker has created over 100,000 ebooks on Amazon through similar software.

Much of this software employs fairly simple search functions to capture the data and reformat it into articles. In other words, very minimal artificial intelligence. Yet, growing interest in machine learning and natural language processing will inevitably mean that the quality of bot-generated content will only increase.

In the very near future, software-created articles will be indistinguishable from a vast amount of human-produced content. Whether that’s a good or bad thing, you can be sure the Wikipedia article on the subject will be furiously edited over time.

[Photo credit: STML/Flickr]

Is Tech Unemployment Good or Bad?

Recently, in conversation with Vinod Khosla, Larry Page was asked what he thinks will happen to jobs in the future as technology begins to replace humans.

His response was essentially, “everybody will work fewer hours.”

Similarly, Richard Branson suggested that in the future, companies might choose to hire two part-time people for every one full-time job, employing more people for less time. We could reduce our work weeks to four days and sprinkle more vacations into the year. I guess most people wouldn’t complain.

But what’s going on here?

I often talk about the incredible benefits exponential technologies like machine learning, robotics, and 3D printing will bring to society. However, one major potential concern is that these same technologies will cause widespread unemployment across the board. This is often referred to as technological unemployment.

In fact, one recent study out of the Oxford Martin School suggests that 47% of our jobs are at high risk of being replaced in the next one to two decades.

Developments in the fields of machine learning and robotics are creating systems that outperform, outpace, and outprice human labor. And the spectrum of jobs that computers can replace is expansive. Everyone, from factory workers and farmers to doctors and lawyers, has their current job at risk.

This is an important topic that you should at least be thinking about. It’s also a complicated subject, so my aim here is not to delve into too much detail; instead, I want to provide a little context and argue that now, more so than ever, is the time to be debating what to do next.

A Little History

Did you know that a letter was sent to the president, signed by Nobel laureates and esteemed professors, warning him of “imminent large-scale technological unemployment”? The letter describes a “cybernation revolution,” brought about by “the combination of the computer and the automated self-regulating machine,” which “results in a system of almost unlimited productive capacity which requires progressively less human labor.”

But this letter was sent in 1964 to President Lyndon B. Johnson—by a group of concerned scientists 50 years ago!

There are a few ways to interpret this:

Option #1: Given that we haven’t seen a “cybernation revolution” in the last 50 years, we probably shouldn’t be worried about it happening anytime soon. The idea here is that technology continually creates more new (and higher-functioning) jobs than it displaces, ultimately benefiting humanity.

Option #2: This time it’s different, and 50 years ago we were in the deceptive phase of an exponential development cycle, but now it’s becoming disruptive, and we are right on the knee of the curve. In other words, it’s time to start preparing for (and becoming comfortable with) large-scale, rapid exponential change.

Where do you fall?

If option #1 is correct (like technology gurus Marc Andreessen and Kevin Kelly believe), then we’re in good shape and we’re heading towards a world of abundance with little downside. For example, 50 years ago, financial institutions employed rooms full of bookkeepers who copied ledgers. When computers first hit the scene and took over this function, it was feared that banking jobs would vaporize. But this never happened. Banks today employ more people than ever before.

If, on the other hand, option #2 is correct, then we have some important conversations and decisions ahead of us, and a lot of seriously concerned people.

I recently held a confidential chairman’s forum at Singularity University with a group of CEOs to think through the potential impact of large-scale technological unemployment.

After many conversations, we identified four possible outcomes. Which of these do you believe is correct?

(1) Net Positive Increase in Jobs: Exponential technologies will cause a net positive increase in meaningful, skilled jobs and opportunities.

(2) Loss of Jobs, Society Adapts: Exponential technologies will dramatically decrease the number of jobs, but society will adjust to this new reality by changing norms around work (days or hours per week, etc.), and/or the creation of work in virtual worlds.

(3) Near-Term Loss & Long-Term Gain of Jobs: There will be a near-term loss of jobs; however, in the long term, exponential technologies will cause a net positive increase in meaningful, skilled jobs and opportunities, perhaps as humans more closely integrate and collaborate with technology.

(4) Loss of Jobs/Society Rebels: We’re screwed: There will be a large-scale loss of jobs in both the near and long-term future, and society will not be able to adapt, and this will cause significant anger and revolution.

I’d love to know what you think!

In fact, here’s a 2-question survey on this subject. Would you take a moment and take it?

https://www.surveymonkey.com/s/RJBR5ZV

More importantly, please share this on social media and with friends and family. I’d love the broadest exposure possible for this survey.

I’ll share the results with you and your friends, if you’d like, in the near future.

What to Do Next

My recommendation is to begin doing your own research: talk with friends, spend some time looking up “technological unemployment” on the Web, read the opinions of vocal thought leaders like Marc Andreessen and Larry Page, and shape your own opinion on the matter.

What do you think about this subject? Spread the word and share your thoughts with me and your friends. Let’s elevate this topic in the global conversation.

If you’re a CEO or entrepreneur and would like to take the discussion one step further, I’ll be spending time on this subject (both the threats and opportunity) at my Abundance 360 Summit.

In the meantime, here are some blogs and links to kick off your own research:

  • Marc Andreessen’s blog on the subject: LINK
  • Larry, Sergey & Vinod discussing this on video (circa minute 15) here: LINK
  • Fear Not The Coming of Robots: LINK
  • From Poverty to Prosperity, Bill Gates: LINK
  • Two more perspectives: Reich and Evans

I’ll share my personal opinions/thoughts more with you next time when I send over the SurveyMonkey results.

Every weekend I send out a blog like this one with my latest insights on technology. To make sure you never miss one, head to www.AbundanceHub.com to sign up for this and my Abundance blogs.

[Photo credits: out of work businessman courtesy of Shutterstock]

NBA Courtside at Home? Live Action Virtual Reality is Here and Better than Expected

William Gibson famously said, “The future is already here – it’s just not evenly distributed.” I’m guessing Gibson was referring to the coming world of live action content for the Oculus Rift, and if that’s true, he’s exactly right. The future is stuck in a Palo Alto office loft. JauntVR’s, to be exact.

Forget the simulated world of virtual reality video games. The real VR disruption is coming from live action. Do you want courtside seats for the Lakers? Looking to get inside the White House press room? Have you been on stage with your favorite band lately? It’s not a matter of if, but of “HOLY CRAP it’s already here!!”

My journey began when I came across a press release describing a new type of cinematic VR platform, just recently out of stealth mode, whose makers announced $6.8M in funding from a who’s-who of the film industry. Fully convinced that this will be a banner year for the industry, I reached out to the team.

To get a sense of what JauntVR has built, consider the company’s description of its technology: “an end-to-end solution for creating cinematic VR experiences.” Their camera films real-world footage from everything surrounding it, and their software stitches those clips together to create a seamless experience. The wrap-around film capture means you’re able to swivel in any direction and maintain that feeling of “actually being there.”
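
JauntVR’s pipeline is proprietary, but the geometry of “swivel in any direction” is easy to sketch: a player simply crops a viewport out of the stitched 360-degree (equirectangular) frame based on head yaw. Here’s a minimal, hypothetical illustration of that mapping:

```python
# Hypothetical illustration of the playback side: crop a viewport out of a
# stitched 360-degree (equirectangular) frame as the viewer swivels.
def viewport_columns(yaw_degrees: float, frame_width: int, fov_degrees: float = 90):
    """Map head yaw to the pixel-column range to display (may wrap around)."""
    center = (yaw_degrees % 360) / 360 * frame_width
    half = fov_degrees / 360 * frame_width / 2
    left = int((center - half) % frame_width)
    right = int((center + half) % frame_width)
    return left, right

print(viewport_columns(0, 3840))    # facing forward: wraps across the seam
print(viewport_columns(180, 3840))  # turned fully around
```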

Rarely is the window into the future as stunningly clear as it was for me during the next few minutes inside that Oculus Rift. So what was it like? (Keep in mind: writing about the mindshock of live action VR is quite like trying to share a photograph of your favorite song. Words simply cannot do it justice.)

I settled into a swivel chair at the center of a dark, surround-sound-equipped room, slid on the Rift, and instantly I was no longer in Palo Alto but sitting inches away from a string quartet performing inside a luxurious concert hall. Stunned by the realism (it is real, after all!), I began turning in every direction, discovering that behind me I could see the full venue where an audience would sit. Then I understood where I was – right where the conductor would stand. Not a bad seat, and with the surround sound so clear, I felt as if I were actually there.

It was the next demo, though, when my jaw finally hit the floor. For copyright reasons, I can only say that I was closer to the sights and sounds of a professional sports event than at any time in my life. And this was real footage, not some simulation. Courtside seats in the home are coming much sooner than expected.

To bring it all home, the last bit of footage landed me onstage at an EDM concert next to a world-famous DJ pumping a record-breaking crowd of 300,000 into an adrenaline-fueled hysteria. And there I was, only feet away from the hero himself. Just a stunning experience you’ll have to see to believe.

So what does it all mean?

It means watching the next FIFA World Cup from the field with a better view than the coaches.

It means running with the bulls, worlds away from any angry bullhorns.

It means accepting that Oscar right alongside Leo DiCaprio, whenever it is he decides to win one.

It means keeping your pajamas on, because JauntVR is saving you a front row seat at anything they can get their cameras to.

But as exciting as this sounds, live event ticket sales won’t vanish overnight. Events are a social affair, and live crowds will never and should never be replaced. Even at home, VR will remain (at least for the short term) a companionless substitute for watching with friends. And no pajama’ed soccer fan wants to tune into a World Cup to find empty stadiums. The challenge for event companies becomes inspiring fans to forgo the comforts of home for the in-person experience, comforts fans know they can always fall back on.

VR won’t replace the real thing, and could actually increase attendance at sports games and concerts. When Spotify and Pandora shifted the music industry to an information service (as VR now appears ready to do to sports and entertainment), fears were that dark days lay ahead for the industry. In reality, more people than ever are showing up to live shows. Billboard conservatively estimates that a whopping $15 billion is now spent annually on concerts worldwide.

What Spotify and Pandora really did was increase exposure for bands looking to add to their fan base. VR could do the same, as an entirely new and mouth-wateringly cool marketing platform, for sports teams seeking out fans in a globalized market. Teenagers far and wide will be tuning in courtside to hang with their favorite superstars.

Still, not everyone may survive the coming disruption. As experiences become an information service, VR could do to the “experience economy” what Netflix did to Blockbuster. Yet it’s unclear what consumer behavior looks like in a post-VR-invaded world. Would consumers favor a VR alternative to Niagara Falls, the most visited natural landmark on earth? Surely businesses like the airlines and Maid of the Mist (a Niagara Falls boat-ride company in business since 1864) should keep watchful eyes on these trends. Businesses in the “experience” industry ought to respect the vaporizing forces of digitization or risk ending up in the analog graveyard with Blockbuster.

Before this week, I would never have believed that today someone could actually sit sideline at a professional game. I don’t think I’ve quite recovered from the haze of appreciation for that experience.

VR enthusiasts have been saying for years, “Won’t it be cool when…” Well that when is now, and that now is more amazing than this VR enthusiast could have imagined. Sooner than you think, your jaw too will tumble to the floor – inside the immersion of a live action VR experience.

[Photo credits: JauntVR]

How the Crowd Taught a Robot to Build a LEGO Turtle

Humans learn by imitation. There’s no predicting what your kid will bring home from school or the park. What if machines could learn like kids? In fact, they can and do. Robots don’t go to the park or school (yet)—but they live in labs and can go online.

University of Washington researchers recently augmented typical in-person imitation learning (where a human shows a robot how to solve a problem) with online crowdsourcing to teach their robots how to build shapes with blocks.

In a paper describing their work, the group says in-person imitation learning together with crowdsourced learning realized better results than in-person imitation alone.

After the robot was taught by 14 volunteers in the lab, the scientists posted questions to Amazon Mechanical Turk. Mechanical Turk distributes simple tasks to thousands of online workers for a small fee per completed task.

The researchers asked users, “How would you make a shape (turtle, person, car, etc.) using these colored blocks?” Their software analyzed hundreds of responses and sorted the best designs by asking participants to rate designs submitted by other users. The program chose the most highly rated responses and built them using physical blocks.
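
Here’s a hedged sketch of that select-by-rating step; the data and function names are invented for illustration and are not taken from the University of Washington code:

```python
# Hedged sketch of the select-by-rating step; data and names are invented,
# not the University of Washington code.
from collections import defaultdict

# Each submission maps a design id to a block layout; each rating is 1-5 stars.
submissions = {"t1": ["green", "green", "red"], "t2": ["blue", "green"]}
ratings = [("t1", 5), ("t1", 4), ("t2", 2), ("t1", 5), ("t2", 3)]

def best_design(subs, ratings, min_votes=2):
    """Average the star ratings and return the top design with enough votes."""
    totals, counts = defaultdict(float), defaultdict(int)
    for design_id, stars in ratings:
        totals[design_id] += stars
        counts[design_id] += 1
    scored = {d: totals[d] / counts[d] for d in subs if counts[d] >= min_votes}
    return max(scored, key=scored.get)

winner = best_design(submissions, ratings)
print(winner, submissions[winner])  # the layout the robot would then build
```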

Crowdsourced turtles (above) and imitation images built in blocks by a Gambit robot (below).

Crowdsourcing learning for robots could be a powerful technique. Traditional in-person teaching involves only a few teachers and can be expensive. But for now, crowdsourcing is likely best combined with other methods and at least some human supervision.

For example, at first, crowdsourced learning seems useful for projects like Google’s self-driving cars. Currently, Google engineers log miles on the road and laboriously catalogue and write code for as many situations as they can. They’re able to anticipate a variety of events, but can’t possibly account for everything.

What if Google’s cars learned to solve rare, low-probability events from the crowd? As it turns out, what works for a robot playing with blocks might be impractical for a multi-ton machine trying to safely navigate city streets. Quality control is crucial.

The authors note in their paper, “Although we demonstrated the benefits of utilizing crowdsourcing [in] the context of robotic imitation learning, crowdsourcing needs to be used with caution. The quality control of crowdsourcing was non-trivial.”

That said, crowdsourced machine learning could help a robot better sort boxes in a warehouse. And maybe Google’s self-driving cars could use even more closely supervised crowdsourcing. The system queries the crowd, selects what it thinks are the best solutions, and engineers give the final thumbs up.

As robots become commonplace, however, we imagine they might cut humans out of the learning equation entirely—that is, as robots interact with us day to day, they analyze what works and what doesn’t, adjust their programming, and share it with each other. Robots would get more capable the more they interact with the world.

Image Credit: “Accelerating Imitation Learning through Crowdsourcing”/University of Washington

New Super-Black, Light-Absorbing Material Looks Like a Hole in Reality

UK nanotechnology company Surrey NanoSystems has created what it says is the darkest material known to man. Vantablack consists of a dense forest of carbon nanotubes—tubes of carbon with walls a single atom thick, 10,000 times thinner than a human hair—that drinks in 99.96% of all incoming radiation.

Vantablack on aluminum.

First announced last year, the material is a deep, featureless black even when folded and scrunched. “You expect to see the hills and all you can see…it’s like black, like a hole, like there’s nothing there. It just looks so strange,” Ben Jensen, the firm’s chief technical officer, told the Independent.

A number of other groups have been working to make super-black materials from carbon nanotubes in recent years. A prime application for the material is in sensitive optical equipment, like telescopes. A NASA Goddard team, led by John Hagopian, has been developing nanotube materials since 2007.

To make their own super-black material, Hagopian’s group lays down a catalyst layer of iron oxide and then, in an 1,832-degree-Fahrenheit (750 °C) oven, bathes the surface in carbon-enriched gas. The resulting multi-walled carbon nanotubes—nanotubes layered inside one another like Russian nesting dolls—can be grown on titanium, copper, and stainless steel.

NASA hopes to replace the black paint currently used in telescopes to minimize contamination by stray light (up to 40% of incoming light is unusable). Super-black materials ten times darker than the black paint may improve observations of distant galaxies or exoplanets orbiting stars in our own galaxy.

“You could get a better observational efficiency,” Hagopian said last year. “You’re not throwing away 40% of your data.”
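
Some back-of-envelope arithmetic shows what “ten times darker” buys. The paint figure below is back-calculated from the quoted claims (Vantablack-class absorption of 99.96% and the tenfold comparison), not a published spec:

```python
# Back-of-envelope only. The paint reflectance is inferred from the quoted
# "ten times darker" claim, not a published spec.
super_black_reflectance = 1 - 0.9996              # absorbs 99.96% of radiation
paint_reflectance = super_black_reflectance * 10  # paint is ten times brighter

stray_photons = 1_000_000                         # hypothetical stray photons
print("reflected by black paint:", round(stray_photons * paint_reflectance))
print("reflected by nanotubes:  ", round(stray_photons * super_black_reflectance))
```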

What makes Vantablack special? Like NASA’s material, Vantablack can be deposited on three-dimensional surfaces, but it’s blacker than NASA’s super-black. Also, Surrey NanoSystems says they make Vantablack at low temperatures. Hot processes, like Hagopian’s, prevent layering on base materials with low melting points. Vantablack can be deposited on a wider selection of materials.

Further, for use in sensitive optics, especially in space, the material needs to dependably adhere to surfaces. Vantablack degrades very little and can withstand the rigors of launch and other vibrations, thereby reducing the risk of instrument contamination.

There’s no word on the cost of the material; however, Surrey NanoSystems is already moving into commercial development. “We are now scaling up production to meet the requirements of our first customers in the defense and space sectors, and have already delivered our first orders,” said Jensen.

Image Credit: Stephanie Getty/NASA Goddard/Flickr; Surrey NanoSystems

Solar PowerCube Provides Electricity, Clean Water, and WiFi in Disaster Zones

Following a major disaster, water, energy, and communications can be in short supply—challenging for residents and relief workers alike. But what if you could provide these necessities using only sunlight? Ecosphere‘s all-in-one solar solution, the Ecos PowerCube, can provide energy, water, satellite communications, and WiFi.

At first glance, the PowerCube looks like little more than a standard shipping container (10, 20, or 40 feet long). But once in place, the container rolls out an array of solar panels on hydraulic shelves—the increased usable surface area produces 400% more power.

The PowerCube provides an internet connection up to 30 miles away. It can produce clean water without a water source by pulling moisture from the air. The unit provides shelter in a pinch. And of course, its solar panels soak up enough sunlight to generate 15 kW of energy for its core functions, emergency hospitals, sleeping quarters, or command centers.

PowerCube is an interesting concept, but how does it compare to traditional solutions?

If the emergency is weather related, solar power might not be an abundant resource as the tatters of the storm swirl overhead in the immediate aftermath. Further, 15 kW is only a bit more than a typical rooftop solar installation produces, and once the unit’s other functions are powered, there won’t likely be a full 15 kW of juice available for external provision at any given time.

A traditional diesel generator, on the other hand, puts out between 600 kW and 1.7 MW of power—at least 40 times more than the PowerCube—day and night, independent of weather. Of course, generators need fuel, but if you’re able to drop off a shipping-container-sized solar plant, you can probably bring along some diesel too.
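
A quick sanity check on that comparison, using the article’s own figures:

```python
# Quick sanity check on the power comparison, using the article's figures.
powercube_kw = 15
diesel_low_kw, diesel_high_kw = 600, 1700

print(f"diesel vs PowerCube: {diesel_low_kw / powercube_kw:.0f}x "
      f"to {diesel_high_kw / powercube_kw:.0f}x more power")
# prints 40x to 113x, matching the "at least 40 times more" claim
```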

Although diesel isn’t as clean or safe as solar, it might be more practical for short-term uses in disaster relief. That is, why not hook up the other functions (satellite, WiFi, and water) to a traditional generator? The total footprint would be smaller, and significant extra power could be used to keep local facilities electrified.

Ecosphere suggests their unit might go beyond emergency relief, however, for use in humanitarian settings in developing countries or in the military. No doubt silent power generation would suit covert military operations, but the lack of generator-level power and storage could be a drawback in other uses.

PowerCube might well be a great solution for humanitarian settings in poor, rural communities. But would it be sustainable? We’re reminded of a recent conversation we had with UNICEF in which representatives noted the problem of the “magic box”—a piece of technology that’s ultimately too complicated for its own good. After normal wear and tear, the technology breaks, no one can fix it, and the thing is left to gently rust away.

The PowerCube is a cool, if imperfect, all-in-one solution. Only experience in the field will determine its efficacy. It might prove very useful in combination with traditional methods. And the two technologies key to the PowerCube’s success—batteries and solar cells—ought to improve in the coming years (faster for solar, a little slower for batteries).

Image Credit: Ecosphere

Humans Aren’t the Pinnacle of Evolution and Consciousness—We’re Only a Rung on the Ladder

In his latest video, Jason Silva, host of National Geographic’s Brain Games and techno-poet, explores the universe’s tendency to self-organize. Biology, he says, seems to have agency and directionality toward greater complexity, and humans are the peak.

“It’s like human beings seem to be the cutting edge,” Silva says. “The evolutionary pinnacle of self-awareness becoming aware of its becoming.”

I know Silva isn’t saying evolution ends with humans in our current form. He thinks technology is driving evolution at an accelerating pace. And indeed, the video’s opening quote from Kevin Kelly is far from human-centric, “The arc of complexity and open-ended creation in the last four billion years is nothing compared to what lies ahead.”

But the line about humans being the “evolutionary pinnacle” reminded me of a trap we’ve fallen into time and again—the temptation to place ourselves at the center of all things. We once believed the cosmos revolved around the Earth. Now, we know the Earth is a vanishingly tiny fragment of metal and rock revolving around an average yellow star.

The solar system is neither unique nor centrally located in the galaxy. We’re on the outskirts of the Milky Way—one of hundreds of billions of galaxies.

If we now know our place in space isn’t at all special, the same may be said of our place in time and on the evolutionary ladder. Humans are perhaps the first rung to develop consciousness (on Earth), but by no means will the process end with us.

In a recent interview, Cambridge’s Martin Rees put human evolution in context as only a cosmologist can. Rees says most of us are probably aware that humans are the result of four billion years of evolution—but we tend to think we’re the apex of the process.

Most folks have little notion of what he calls the “far future.” Astronomers, on the other hand, know that the sun is middle-aged and that the Earth has at least as much life ahead of it as it has behind. The universe itself may have an infinite future. We’re perhaps only halfway (or less) “in the emergence of ever greater complexity.”

“Any creatures who will be alive to witness the death of the sun won’t be human—they could be as different from us as we are from protozoa. Indeed future evolution is going to take place not on the Darwinian time scale, of natural selection, but on the technology time scale, because we’re obtaining the capacity to modify the genome.”

Add accelerating evolutionary processes to cosmological deep time, and a future when intelligence has evolved beyond humans, indeed a future far surpassing even our wildest guesses, becomes an inevitability—if our descendants can make it that far.

Image Credit: TestTube (“Shots of Awe”)/YouTube