August 2014

This Week’s Awesome Stories from Around the Web

Singularity Hub


What’s the most gripping, mind-bending story you’ve read this week? The Hub team has put together the week’s most intriguing stories from around the web. Did we miss anything? If so, add it to the comments.

Singularity or Transhumanism: What Word Should We Use to Discuss the Future? | Slate
“While there is overlap, each name represents a unique camp of thought, strategy, and possible historical outcome for the people pushing their vision of the future. Whatever wins out will be the buzzword that both the public and history will embrace as we continue to move into a future rife with uncertainty and risk…”

Journalism and the internet: Is it the best of times? No — but it’s not the worst of times either | GigaOM
Is there a lot of noise and low-quality writing on the internet? Definitely. Does much of it come from sites that claim to be doing journalism? You bet. Is any of this unique to the internet age? Not even close. 

Inside Google’s Secret Drone-Delivery Program | The Atlantic
“The company’s framework for societal transformation has been conditioned by the relentless decrease in cost and increase in performance of computers. They believe order-of-magnitude changes can happen quickly because they’ve seen and participated in both the rise of the commercial web and the astonishing growth of mobile computing.”

​Climate Change Has an Outrage Problem | VICE
“When scientists are scared, you know shit is getting real.”

Looking to the Future of Data Science | The New York Times
“Mr. Etzioni is leading a growing team of 30 researchers that is working on systems that move from data to knowledge to theories, and then can reason. The test, he said, is: ‘Does it combine things it knows to draw conclusions?’ This is the step from correlation, probabilities and prediction to a computer system that can understand, in its way.”

Basic Income Is Practical Today…Necessary Soon | Hawkins Ventures
“Workforce automation will drive away the need for humans to work, allowing us to enter a new era of post-abundance society with Basic Income. We are going to discuss why this is something that is possible today, easy to accomplish in 20 years, and necessary in 40 years.”

How China’s mobile ecosystem is different from the West | The Next Web
“Such a huge smartphone market is a double-edged sword — it means a large pool of potential users for app developers, but it’s also harder to figure out what works and what doesn’t.”

[Image credit: Climate Change Concept courtesy of Shutterstock]


Can You 3D Print Emotions? New “Love Project” Uses Biometric Sensors to Create Household Objects



Everyone has knick-knacks of sentimental value around their home, but what if your emotions could actually be shaped into household things?

A project recently unveiled at the Sao Paulo Design Weekend turns feelings of love into physical objects using 3D printing and biometric sensors. “Each product is unique and contains the most intimate emotions of the participants’ love stories,” explains designer Guto Requena.

As users recount the greatest love stories of their lives, sensors track heart rate, voice inflection, and brain activity. Data from their physical and emotional responses are collected and interpreted in an interface, transforming the various inputs into a single output.

The real-time visualization of the data is modeled using a system of particles, in which voice data determines particle velocity, heart rate controls the thickness of the particles, and brain-wave data causes the particles to repel or attract one another. To shape these particles into the form of an everyday object such as a lamp, fruit bowl, or vase, a grid of forces guides the particles as they flow along their course.
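
As a rough illustration of that mapping, one update step of such a particle system might look like the sketch below. The article doesn’t describe Requena’s actual implementation, so the function, its parameter names, and the normalization of the sensor readings are all hypothetical:

```python
def update_particles(particles, voice, heart_rate, brain_wave, grid_force, dt=0.1):
    """One toy step of the sensor-to-particle mapping described above.

    particles  : list of dicts with 'pos' (x, y), 'vel' (vx, vy), 'size'
    voice      : 0..1, scales particle speed (voice inflection data)
    heart_rate : 0..1, sets particle thickness
    brain_wave : -1..1, negative repels particles, positive attracts them
    grid_force : function (x, y) -> (fx, fy) steering particles toward
                 the target object's silhouette (lamp, bowl, vase...)
    """
    for i, p in enumerate(particles):
        fx, fy = grid_force(*p['pos'])
        # pairwise attraction/repulsion driven by the brain-wave signal
        # (particles are updated in place, which is fine for a toy model)
        for j, q in enumerate(particles):
            if i == j:
                continue
            dx = q['pos'][0] - p['pos'][0]
            dy = q['pos'][1] - p['pos'][1]
            dist2 = dx * dx + dy * dy + 1e-6
            fx += brain_wave * dx / dist2
            fy += brain_wave * dy / dist2
        # voice data scales velocity; heart rate sets thickness
        vx = (p['vel'][0] + fx * dt) * (0.5 + voice)
        vy = (p['vel'][1] + fy * dt) * (0.5 + voice)
        p['vel'] = (vx, vy)
        p['pos'] = (p['pos'][0] + vx * dt, p['pos'][1] + vy * dt)
        p['size'] = 1.0 + 4.0 * heart_rate
    return particles
```

A renderer would then trace each particle’s path at its current thickness, and the grid of forces would pull the swarm into the chosen household shape over many steps.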

The final designs are then sent to a 3D printer, which can print in a variety of materials including thermoplastics, glass, ceramic or metal.


While this method is both creative and intriguing, are any of the produced objects meaningful if the viewer is unable to interpret the emotion from which the object was created?

Looking at the objects produced, it’s difficult to imagine them eliciting similar feelings in viewers as were felt by those recounting their tales of love. Furthermore, the data that produced the objects cannot be extracted, so that information is lost.


Other groups have begun to experiment with infusing printed objects with interactivity, where the object provides the user with information. Techniques have been developed to convert digital audio files into 3D-printable records, and to play a ‘sound bite’ embedded in a 3D printed object by running a fingernail or credit card along its printed ridges.

The “Love Project” is an interesting experiment that successfully includes the end user in the process of creating objects of meaning, while also democratizing and demystifying interactive digital technologies. Yet it’s a stretch to think that the aesthetics of the objects themselves can help us understand the mysterious human emotion of love.

What would be truly exciting is if we could transform intangible emotions into data, turn that data into a physical object, and interact with the object in a way that brings insight, meaning, and a new way of understanding and visualizing our emotional states. With designers like Requena and Neri Oxman finding new ways to integrate art and 3D printing, we’re likely to see even more exciting projects at the interface of technology and expression on the horizon.

[Photo credits: Guto Requena]

Skully Motorcycle Helmet Not Quite Iron Man, But a Taste of Our Augmented Reality Future



If Tony Stark designed a motorcycle helmet, it might look a little like Skully.

Sleek black (or white) with an aerodynamic fin. A visor that changes tint at the touch of a finger. A rear 180-degree camera surveying the road behind and beside the rider and streaming the video through a display in the front. Voice command? Of course.

Riders can check blind spots or get directions without turning their head or taking their eyes off the road. A connected smartphone sends incoming calls to the helmet’s earphones or makes outgoing calls by request. Tunes? Yes, of course. Just ask.

Skully looks like the future. But there’s nothing new or special about what the helmet can do. Pretty much everything on offer is old hat.

High-definition action video is owned by GoPro. Navigation and directions showed up in cars a long, long time ago. Voice recognition is standard in Siri, Google Search, and other applications. Bluetooth helmets already connect to phones.

The novel, Iron Man-like feature is that Skully includes all this and ties it together with voice recognition and a head-up display. If the design is right, it’ll be awesome.

And the Indiegogo community seems to agree. The campaign was the fastest to reach a million in funding and is set to run another 21 days. Of course, challenges remain. Undoubtedly, addressing safety concerns will be foremost on their to-do list.


Skully may develop into a must-have (regularly copied) motorcycle helmet. Or not. Never mind playing that game. Just take a moment to admire what’s on offer here.

The Iron Man helmet? Much of the tech to make it is already here. And some of the things that aren’t quite here may be soon.

For example, big electronics firms have been gunning for flexible, ultra-thin, transparent displays. Why not make a motorcycle helmet where the entire visor is a curved, transparent OLED screen overlaying key information onto the road ahead?

Add a few head tracking sensors, like those used to make immersive digital worlds in the Oculus Rift, and a future helmet could make digital markers (directions or signs) look like a native part of the road or scenery—true augmented reality.

And while Stark’s friendly AI doesn’t exist yet, natural language processing is improving. At MIT Technology Review’s Digital Summit, Tim Tuttle, CEO and founder of Expect Labs, noted a virtuous cycle in voice recognition and machine learning—the more people use them, the better they get, and the better they get, the more people use them.

Tuttle went on to say that voice recognition accuracy had improved 30% in just 18 months—more than in the whole prior decade. Add that to machine learning efforts at Google or IBM or Facebook, and future digital assistants will not only understand what we’re saying, they’ll know what we mean and anticipate our needs.

Computer vision, too, is improving. On the back of sophisticated methods made feasible by the rapidly decreasing cost of computing power, this year’s Large Scale Visual Recognition Challenge witnessed a doubling in accuracy—the five-year-old contest has seen two such radical improvements, first in 2012 and now in 2014.

A motorcycle helmet that can look through a video camera and converse with the rider might see and note things the rider hasn’t noticed—a nearby animal, biker, or pedestrian; a traffic sign; a car in their blind spot; a green or red light.

All this technology might seem more distracting than helpful, but that’s only if the design isn’t dialed. If it works as it should, you’ll notice it less. But it won’t be that way right away. The early internet was a riot of pop-ups and distracting design. It still has its wild areas, but the overall experience has improved leaps and bounds.

We expect something similar will happen with wearable technologies—a little distracting early, but dialed in over the years. In the context of improving displays, sensors, and AI, Skully is a tantalizing hint at the powerful wearable interfaces yet to come.

Image Credit: Skully

How to Plan a Revolution (an Excerpt from Abundance)


How did a simple Facebook group mobilize 12 million people in 40 countries in just one month?

That’s exactly what Oscar Morales accomplished when, in 2008, he created a Facebook group called A Million Voices Against FARC.

I wrote about his story in Abundance — the excerpt follows.

You’ve heard me say that technology is a resource-liberating force; check out the human resources liberated by this!

 

One Million Voices, an excerpt from Abundance

In 2004, while doing graduate work as a Rhodes scholar at Oxford University, Jared Cohen decided that he wanted to visit Iran. Since Iran’s stance against the United States is based partially on US support of Israel, Cohen — a Jewish American — didn’t think he stood much chance of getting a visa. His friends told him not to bother applying. Experts told him he was wasting his time. But after four months and sixteen trips to the Iranian Embassy in London, he received permission to travel to, as Cohen later recounted in his book Children of Jihad: A Young American’s Travels Among the Youth of the Middle East, “a country that President Bush had less than two years ago labeled as one of the three members of the ‘axis of evil.’”

The purpose of Cohen’s trip was to expand his knowledge of international relations. He wanted to interview opposition leaders, government officials, and other reformers, but after successful conversations with the Iranian vice president and several members of the opposition, the government’s Revolutionary Guard sauntered into his hotel room late one night, found his potential interview list, and foiled his plans. But rather than leaving Iran and flying back to England defeated, Cohen decided to explore the country and see what kinds of friends he made along the way.

He made plenty of friends, most of them young. Two-thirds of Iran is under the age of thirty. Cohen dubbed them “the real opposition,” a massive, not-especially-dogmatic youth movement hungry for Western culture and suffocating under the current regime. He also discovered that technology was allowing this movement to flourish — a lesson that crystallized for him at a busy intersection in downtown Shiraz, where Cohen noticed a half dozen teens and twentysomethings leaning up against the sides of buildings, staring at their cell phones.

He asked one boy what was going on and was told this was the spot everyone came to use Bluetooth to connect to the Internet.

“Aren’t you worried?” asked Cohen. “You’re doing this out in the open. Aren’t you worried you might get caught?”

The boy shook his head no. “Nobody over thirty knows what Bluetooth is.”

That was when it hit him: the digital divide had become the generation gap, and this, Cohen realized, opened a window of opportunity. In countries where free speech was wishful thinking, folks with basic technological savvy suddenly had access to a private communication network. As people under thirty constitute a majority in the Muslim world, Cohen came to believe that technology could help them nurture an identity not based on radical violence.

These ideas found a welcome home in the US State Department. When Cohen was twenty-four years old, then Secretary of State Condoleezza Rice hired him as the youngest member of her policy planning staff. He was still on her staff a few years later when strange reports about massive anti-FARC protests started trickling in. The FARC, or Revolutionary Armed Forces of Colombia, a forty-year-old Colombia-based Marxist-Leninist insurgency group, had long made its living on terrorism, drugs, arms dealing, and kidnapping. Bridges were blown up, planes were blown up, towns were shot to hell. Between 1999 and 2007, the FARC controlled 40 percent of Colombia. Hostage taking had become so common that by early 2008, seven hundred people were being held, including Colombian presidential candidate Íngrid Betancourt — who’d been kidnapped during the 2002 campaign. But suddenly, and seemingly out of nowhere, on February 5, 2008, in cities all over the world, twelve million people poured into the streets, protesting the rebels and demanding the release of hostages.

Nobody at State quite understood what was going on. The protestors appeared spontaneously. They appeared to be leaderless. But the gathering seemed to have been somehow coordinated through the Internet. Since Cohen was the youngest guy around — the one who supposedly “spoke” technology — he was told to figure it out. In trying to do that, Cohen discovered that a Colombian computer engineer named Oscar Morales might have been responsible. “So I cold-called the guy,” recounts Cohen. “Hi. How are you? Can you tell me how you did this?”

What had Morales done to bring millions of people into the streets in a country where, for decades, anyone who said anything against the FARC wound up kidnapped or dead or worse? He’d created a Facebook group. He called it A Million Voices Against FARC. Across the page, he typed, in all capital letters, four simple pleas: “NO MORE KIDNAPPING, NO MORE LIES, NO MORE DEATH, NO MORE FARC.”

“At the time, I didn’t care if only five people joined me,” said Morales. “What I really wanted to do was stand up and create a precedent: we young people are no longer tolerant of terrorism and kidnapping.”

Morales finished building his Facebook page around three in the morning on January 4, 2008, then went to bed. When he woke up twelve hours later, the group had 1,500 members. A day later it was 4,000. By day three, 8,000. Then things got really exponential. At the end of the first week, he was up to 100,000 members. This was about the time that Morales and his friends decided that it was time to step out of the virtual world and into the real one.

Only one month later, with the help of 400,000 volunteers, A Million Voices mobilized some 12 million people in two hundred cities in forty countries, with 1.5 million taking to the streets of Bogotá alone. So much publicity was generated by these protests that news of them penetrated deep into FARC-held territory, where news didn’t often penetrate. “When FARC soldiers heard about how many people were against them,” says Cohen, “they realized the war had turned. As a result, there was a massive wave of demilitarization.”

Cohen was fascinated. He flew down to Colombia to meet with Morales. What surprised him most was the structure of the organization. “Everything I saw had the structure of a real nongovernmental organization — but there was no NGO. There was the Internet. You had followers instead of members, volunteers instead of paid staff. But this guy and his Facebook friends helped take down the FARC.” For Cohen and the rest of the State Department, it was something of a watershed moment. “It was the first time we grasped the importance of social platforms like Facebook and the impact they could have on youth empowerment.”

This was also about the time that Cohen decided technology needed to be a fundamental part of US foreign policy. He found willing allies in the Obama administration. Secretary of State Clinton had made the strategic use of technology, which she termed “twenty-first-century statecraft,” a top priority. “We find ourselves living in a moment in human history when we have the potential to engage in these new and innovative forms of diplomacy,” said Secretary Clinton, “and to also use them to help individuals empower their development.”

Toward this end, Cohen had become increasingly concerned about the gap between local challenges in developing nations and the people who made the high-tech tools of the twenty-first century. So, wearing his State Department hat, he started bringing technology executives to the Middle East, primarily to Iraq. Among those invited was Twitter founder Jack Dorsey. Six months after that trip, when Iranian postelection protestors overran the streets of Tehran, and a government news blackout threatened all traditional lines of communication, Cohen called Dorsey and asked him to postpone a routine maintenance shutdown of the Twitter site. And the rest, as they say, is history.

Twitter, of course, soon became the only available pipeline to the outside world, and while the Twitter revolution didn’t topple the Iranian government, in combination with Morales’s efforts and other Internet-based activism campaigns, all of these events paved the way for what we would soon be calling the Arab Spring.

“It didn’t happen intentionally,” says Cohen. “Bluetooth was a technology invented so people could talk and drive — nobody who built it expected their peer-to-peer network would be used to get around an oppressive regime. But the message of the events of the past few years is clear: modern information and communication technologies are the greatest tools for self-empowerment we’ve ever seen.”

 

Please send your friends and family to AbundanceHub.com to sign up for these blogs — this is all about surrounding yourself with abundance-minded thinkers. And if you want my personal coaching on these topics, consider joining my Abundance 360 membership program for entrepreneurs.

[Image credits: courtesy of diamandis.com and Shutterstock]

Unlocking the Mystery of Limb Regeneration: Genes for Lizard Tail Regrowth Determined



For people who’ve lost a limb, advances in materials and 3D printing have produced a slew of new prosthetics that deliver greater mobility, custom fitting, and sleek designs. Yet the ability to completely regrow a lost limb remains daunting, despite the growing research on limb regeneration in reptiles, amphibians, and fish.

Now a team led by researchers at Arizona State University has taken a significant step in understanding the process of tail regeneration in green anole lizards.


[Image from PLOS ONE]

When lizards lose their tails, at least 326 genes (302 of which are also in humans) are activated over the course of regeneration. These genes are involved in embryonic development, wound healing, and hormone response. Additionally, genes triggered in the regeneration of a lizard’s tail are in the ‘Wnt pathway’, which also controls stem cells in organs such as the brain and blood vessels.

“Lizards form a complex regenerating structure with cells growing into tissues at a number of sites along the tail,” said co-author and graduate student Elizabeth Hutchins.

After the tail has fallen off, a scab forms on the wound and cells begin to divide underneath it. Satellite cells (a form of stem cell) regrow muscle tissue as new skin, blood vessels, and cartilage regenerate as well.

After 60 days of regrowth, the new tail isn’t identical to the original. Instead of a spine, a hollow cartilage tube makes up the internal structure, and the muscle groups are distinctly different.

The results were published recently in PLOS ONE.

While the process is complex, the hope of course is that by unraveling the mechanism of tail regeneration, a method for regenerating human limbs might be revealed.

“Lizards basically share the same toolbox of genes as humans,” stated Professor Kenro Kusumi, who led the study. “[They] are the most closely related animals to humans that can regenerate entire appendages.” He added, “By following the genetic recipe for regeneration that is found in lizards, and then harnessing those same genes in human cells, it may be possible to regrow new cartilage, muscle or even spinal cord in the future.”

While the reality of human limb regeneration is in the distant future, prosthetics will likely continue to become more advanced and smarter as they become less of a medical device and more of a cybernetic enhancement.

With enough success, the research on limb regeneration might ultimately prove more useful in the generation of new kinds of limbs for people, a future that even sci-fi authors struggle to imagine.

[Image credits: PLOS ONE; Anole lizard courtesy of Shutterstock]

Steve Jobs, Larry Page And Rush Limbaugh Walk Into A Bar: A Look At The Future of Truth



This is a tale of memory, truth, technology, and, well, the future of humanity—but it starts in high school.

If you went to high school in America, there is a pretty good chance you learned to write essays using the dreaded five paragraph method. For those who don’t remember, the structure is this: Introductory paragraph (wherein you lay out your thesis), followed by three supporting paragraphs (each one making a different yet complementary supporting argument), finished with a conclusion (essentially your introduction restated and a final conclusion drawn).

What I want to point out here is the amount of data being offered up. While it’s called a five paragraph essay, the argument itself hinges on three main data points. Three core ideas. Because of this, the five paragraph essay is also known as the “hamburger essay” or “one, three, one,” or, occasionally, a “three-tier essay.”

Ever wonder why? Why three tiers? Why five paragraphs? Seriously? Generations of Americans have been taught to write this way. If, as the author David Foster Wallace so ironically pointed out, the purpose of an education is to teach students how to think, why exactly are we teaching them to think this way?

The answer lies in working memory, our technical term for the part of your brain (roughly: frontal cortex, parietal cortex, anterior cingulate, and basal ganglia) that holds the information currently active in consciousness—that is, the things you are actually aware of, the things you can actually think about.

For example, when you ask someone for their email address, when they answer, it is working memory that holds onto that answer long enough for you to write it down. In computer terms, your working memory is your RAM. But—the most important point here—it is also an extremely limited bit of RAM.

In 1956, Harvard cognitive psychologist George Miller published what has become one of the most famous papers in psychology: “The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information.” Miller’s discovery was that working memory has a limit of roughly seven items. That’s it. That’s the most stuff we can hold in our consciousness at once. That’s why phone numbers are seven digits long—any longer and we would have trouble remembering them.

But this magic number seven is actually misleading. It’s not that we can’t hold seven items in consciousness at once, it’s that we usually don’t. In fact, in thousands of follow up studies, most researchers found that we usually hold about three or four items in consciousness at once.

And this brings us back to the five paragraph essay.

Why five paragraphs? Well, because, when considering an argument, we can usually only hold onto just three or four ideas at once. Thus the structure of the essay: one introductory chunk that introduces a thesis, three supporting notions that back up the thesis, then a reiteration and, perhaps, slight extension of the thesis. In other words, the five paragraph essay is customized for the brain’s internal processing limits—it’s built to work with working memory.

Of course, it’s not just five paragraph essays. One of the most famous rules in writing, speaking and music is the “Rule of Three.” The rule is that concepts or ideas presented in threes are inherently more interesting, more enjoyable and more memorable.

This is why the Declaration of Independence talks about “Life, Liberty and the Pursuit of Happiness.” It’s why we have three wise men and three blind mice and three musketeers. It’s “Blood, Sweat and Tears” and “Earth, Wind and Fire” and “Sex, Drugs and Rock-and-Roll.”

A couple years ago, Carmine Gallo wrote a great blog post about how uber-presenter Steve Jobs relied on the Rule of Three. The 2011 iPad 2 was described as “thinner, lighter, faster,” while the 2007 iPhone was made up of three new products: “a new iPod, a phone and an internet communications device.”

The rule of three brings us to an overlooked facet of epistemology, our fancy word for the study of truth. In this case, we’re not talking about capital “T” truth, the truth of philosophers, rather small “t” truth, practical truth, truth for the weary trenches of this long life.

Consider that every second of every day our senses are bombarded with information. This is the data that underpins our decisions, our actions, our way. As a result, every bit that comes in must be evaluated. Is this information accurate? Am I being deceived? Am I misinterpreting something? Do I have enough information to make this judgment call?

The brain is endlessly trying to decide if the information we’re receiving is valid enough to act upon. And this is what I mean by small “t” practical truth — truth that is valid enough to act upon.

My point here is that while we don’t think much about epistemology consciously, subconsciously we think about it constantly. Every tidbit of salient information that enters our brain is subjected to a rigorous truth detection process. It’s a fundamental property of being human. But this process is also limited by the limits of our working memory. Think about that five paragraph essay. The goal of the essay is to make a convincing argument. Well, what is a convincing argument? It’s something that changes what we believe; it establishes a new truth. And how does this happen? By giving our truth detection system three facts to work with, by abiding by the rules of working memory.

And this brings us to technology. We have, as has been well-documented in this blog, arrived at the age of the cyborg. Humans and machines are beginning to merge. We already have soldiers returning to combat wearing bionic limbs, paraplegics able to move computer cursors with their minds, and a whole host of artificial senses (cochlear implants being only one example) to choose from.

We also have Google Glass — the internet in your eyeglasses. But it’s not hard to imagine that soon the glasses will give way to contact lenses and, eventually, to brain implants. Larry Page has spoken extensively about his dream of putting Google in the brain. Ray Kurzweil, in his recent TED talk, “Get ready for hybrid thinking,” explains it like this: “So you’ll be walking along, and Google will pop up and say, ‘You know, Mary, you expressed concern to me a month ago that your glutathione supplement wasn’t getting past the blood-brain barrier. Well, new research just came out thirteen seconds ago that shows a whole new approach to that and a new way to take glutathione. Let me summarize it for you.’”

But here’s the question — will that summary contain three items? Will it be built to work with working memory? Sure, in the beginning, perhaps. But soon we’ll be able to augment our working memory, boosting the brain’s internal processing limit and, by extension, our truth detection capabilities.

This fact raises some very interesting questions. Consider the state of modern media. Consider the dominance of the blowhards, the exaggerators, the ones who are going to “tell it like it is” but don’t. In other words, consider the likes of Rush Limbaugh.

Today, our airwaves are dominated by such blather — liberal, conservative or otherwise. But what happens when our working memory can hold the whole argument, can fact-check in real time, can use cloud-based supercomputing to massively extend what has forever been an exceptionally limited truth detection capacity? Imagine what happens when augmented cognition allows the rule of three to become the rule of three thousand, three million, three billion. We’re not just resetting a few parameters on our internal lie detectors; we’re making the jump to light speed, forever changing the nature of truth and, as a result, the fundamental nature of our reality.

So say goodbye to the likes of Rush Limbaugh. And say goodbye to most everything else as well.

For all of human existence, the way we live in this world has been shaped by very fundamental properties of the brain — properties like the bandwidth of our truth detection system. But not for long. Oh yeah, the future is coming for us.

 

Sci-Fi Short Restitution Explores Whether Humans Are Ethically Ready for Cloning’s Consequences



Among the spectrum of technological innovations potentially forthcoming, human cloning is among the most debated and ethically ambiguous. In his award-winning sci-fi short, Restitution, writer/director Justin Miller explores human cloning and the lengths a broken family will go to in order to feel whole again:

Architect and workaholic Preston Sanders struggles to reconcile with his wife Susan after the recent death of their oldest child. Their relationship is further strained when Preston discovers that his wife has resorted to an unconventional coping mechanism: cloning their youngest son.

While weighed down by some wooden dialogue, this film shines when it shows rather than tells. The clever computing technology Preston uses to work is seamlessly integrated, suggesting that in the future, technology may be so intuitive, it’ll be practically invisible.

The short’s ending reflects the difficult situation we contend with even now: with access to extraordinarily advanced technologies, have our emotional and rational abilities advanced at the same rate?

Enjoy your Saturday Singularity Cinema!

Your Legacy: Getting Off This Rock

Singularity Hub

Apollo 11

We just celebrated the 45th anniversary of the Apollo 11 Moon landing.

The fact that we went to the Moon with 1960s technology is extraordinary.

The fact that we never went back is shameful.

Should we send another mission to the Moon? Absolutely.

But it should be a private effort — incentivized by government, but not carried out by the government.

And it should be part of humanity’s expansion to Mars and the near-Earth asteroids as well.

Thousands of years from now, it will be these next few decades that are remembered as the moment in time when the human race became a multi-planetary species.

You are alive during these times and it’s part of your legacy.

But this time, when we go back to the Moon, it won’t be with an Apollo-style program.

Missions this complex now require the kind of cost efficiencies and risk mindset found only in today’s commercial industries and entrepreneurial risk takers.

To be affordable and successful, these missions need to use the accelerating (exponential) technologies we are developing today in our labs and commercial companies.

Unfortunately, our traditional NASA approach was to use 20-year-old stuff — or, in other words, only use technology that has been proven to work time and time again.

Did you know that Curiosity — the pinnacle of our Mars exploration program roving around the surface of Mars today — is using a PowerPC processor similar to that in your 1997 PowerBook G3 Laptop… 17 years ago?

The other challenge with our traditional government space programs is their “start-stop-start-stop-CANCEL” cycle. The biggest of programs take a decade to execute (and thereby span several election cycles). As such, time and time again we’ve seen the most audacious government ventures canceled as Democrats scrap Republican initiatives, and Republicans sideline Democratic programs. Consequently, nothing gets accomplished.

It is only with a commercial mindset and commercial technologies (supported by government incentives) that we will achieve the long-term exploration, commercialization and industrialization of space.

The systems pioneered by a company like SpaceX, with its Falcon 9 rocket and Dragon2 spacecraft (which allows for propulsive landing), are a perfect example of what we need to go back to the Moon and beyond to Mars.


By the way, have you ever heard Elon speak about his plans for Mars? He’s publicly committed to providing round-trip human transport to the Red Planet for $500,000 per person in about 15 years.

I, for one, wouldn’t bet against him.

At the same time, my own company Planetary Resources has a team of 40 engineers up in Redmond, Washington building a new generation of autonomous ‘space drones’ called the Arkyd-200 and Arkyd-300 spacecraft. These are drones designed to find and prospect near-Earth asteroids for strategic metals and rocket fuel (hydrogen and oxygen).

Finally, I’m very proud of our Google Lunar XPRIZE, which has offered $30 million (of Google’s money) in purses, plus up to $30 million in additional NASA contracts, to the first private team to build a robot, land it on the Moon, send back photos and videos, and rove (or hop) 500 meters.

If successful, all of these commercial projects and initiatives will spark the creation of a cottage industry of exploration companies that will help bring down the cost of accessing the Moon, Mars and the asteroids by 50-fold.

These next few decades represent the window in time when the human race is moving irreversibly off the Earth. Thousands of years from now, when humanity looks back, it will be our generation who took the bold steps beyond the bounds of Earth.

Konstantin E. Tsiolkovsky, the Russian scientist considered the father of modern-day cosmonautics, famously said, “Earth is the cradle of humanity, but one cannot remain in the cradle forever.” It’s time for us to get out of the cradle and start exploring the boundless resources of space.

All of this is being made possible by exponential technologies. Ultimately we are demonetizing and democratizing access to space and near-Earth resources.

If you’d like to learn more about how this is all happening, consider joining me this year at Abundance 360.

In the meantime, when you look up at the Moon at night, know that you and your children will actually have the opportunity to travel there one day soon.

[Photo credits: NASA]

Every weekend I send out a “Tech Blog” like this one. If you want to sign up, go to www.AbundanceHub.com and sign up for this and my Abundance blogs. And if you want my personal coaching on how to similarly drive industry breakthroughs, consider joining my Abundance 360 membership program for entrepreneurs.

What We’re Reading This Week

Singularity Hub


It’s Friday and that means it’s time to share stories and tech that we’ve been reading, thinking about, and passing around within the Singularity Hub team this week:

Omote: Real-time Face Tracking and Projection Mapping | Vimeo
“Project Omote is a collaboration between Japanese projection mapping specialist Nobumichi Asai, makeup artist Hiroto Kuwahara and French digital image engineer Paul Lacroix.”

One way to tell how rich a country is: Look at its profusion of Android phones | Quartz
“The firm crunched the numbers and found that a country’s GDP per capita directly correlates to the level of fragmentation in the market.”

Why the Public Library Beats Amazon—For Now | The Wall Street Journal
“Publishers have come to see libraries not only as a source of income, but also as a marketing vehicle. Since the Internet has killed off so many bookstores, libraries have become de facto showrooms for discovering books.”

Inside The World’s Most Intriguing (And Probably Only) Futurist Bar | Co.Exist
“‘We knew we wanted a chalkboard robot, and it turns out there is a guy that makes them.’”

The Future Of College? | The Atlantic
“I felt my attention snapped back to the narrow issue at hand, because I had to answer a quiz question or articulate a position. I was forced, in effect, to learn. If this was the education of the future, it seemed vaguely fascistic. Good, but fascistic.”

We’re bad judges, better teachers, and video games are pretty good for us | SciShow/YouTube
“So apparently if you want to learn just think about how to teach.”

[Image credit: Omote/Vimeo]

Thousand-Robot Swarm Hints at Future Car, Drone, Even Nanobot Collectives

Singularity Hub


When you think nanorobot, you don’t think just one. Or ten. You think millions or billions. Huge swarms of nanobots may work in concert with each other to accomplish tasks on tiny scales, perhaps in the human body, or more radically, to form larger robots, each nanobot functioning like a mechanical cell or 3D pixel.

Although we don’t have practical nanobots yet, we can work on the software that may coordinate them—and in fact, that’s exactly what a Harvard group has been up to.

Working out of Harvard’s School of Engineering and Applied Sciences (SEAS) and the Wyss Institute, researchers Radhika Nagpal, Michael Rubenstein, and Alejandro Cornejo first unveiled their quarter-sized Kilobots in 2011 when 25 of them were shown capable of performing synchronized actions.

The name Kilobot refers to the goal of building collectives of a thousand coordinated robots. That initial squad of 25 grew to 100 last year and now, most recently, to 1024. In a new video, the robots autonomously assume shapes with no more guidance than the original input from the researchers (e.g., form the letter ‘K’).

Each $14 robot is exceedingly simple, just two vibrating motors on three spindly legs.

But we know from a flock of swallows or an ant colony that complex patterns can arise from simple individual behaviors. In this case, four Kilobots mark the center of a grid. Then, by hopping along the edge of the group and noting the relative distance to the center and each other with infrared transmitters and receivers, the bots shuffle into place.

From these basic abilities they can form shapes—letters, a wrench, a starfish.
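The real Kilobots localize via infrared distance sensing, gradient hop-counts from the seed robots, and distributed trilateration. As a heavily simplified illustration (not the Harvard team’s actual algorithm), the one-robot-at-a-time assembly idea can be sketched on a grid, where each new robot attaches at a free target cell touching the structure built so far:

```python
# Toy grid model of Kilobot-style self-assembly. Robots join a target
# shape one at a time, each attaching at a free target cell adjacent to
# the current structure -- a stand-in for the physical edge-following
# walk. Sensing, trilateration, and error correction are not modeled.

def neighbors4(cell):
    """The four grid cells orthogonally adjacent to `cell`."""
    x, y = cell
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def assemble(target, seed):
    """Return the order in which robots occupy the cells of `target`.

    `target` is a set of (x, y) cells describing the desired shape;
    `seed` is the cell where the seed robots mark the origin. Each
    iteration, one robot attaches at any free target cell that touches
    the structure assembled so far.
    """
    occupied = {seed}
    order = [seed]
    while len(occupied) < len(target):
        spot = next(
            (n for cell in occupied for n in neighbors4(cell)
             if n in target and n not in occupied),
            None,
        )
        if spot is None:  # remaining cells unreachable: shape disconnected
            break
        occupied.add(spot)
        order.append(spot)
    return order

# Fill a 4x4 square starting from its corner.
square = {(x, y) for x in range(4) for y in range(4)}
print(len(assemble(square, (0, 0))))  # 16 robots place themselves
```

Because each robot only needs to know which nearby cells are occupied and whether a cell lies inside the shape, the global pattern emerges from purely local rules, which is the point the researchers make about biological collectives.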

“Biological collectives involve enormous numbers of cooperating entities—whether you think of cells or insects or animals—that together accomplish a single task that is a magnitude beyond the scale of any individual,” said Michael Rubenstein, lead author of the group’s recent paper in the journal Science and research associate at Harvard SEAS and the Wyss Institute.

Algorithms can be written and tested in computer models, but having real-world robots to test them on is key. In practice the robots need to self-correct for problems like traffic jams and broken or wayward bots. Cost is also often a limiting factor. According to the Harvard Gazette, only a few robot swarms have so far surpassed 100 members.

In the future, we may see more and more machines behaving as collectives in groups that far exceed 1,000. But we won’t have to wait around for robots built on the nanoscale. Human-scale robots will benefit from coordination too—examples may include disaster response, environmental cleanup, drone delivery systems, or self-driving cars.

“Increasingly, we’re going to see large numbers of robots working together, whether it’s hundreds of robots cooperating to achieve environmental cleanup or a quick disaster response, or millions of self-driving cars on our highways,” Nagpal said.

Learn more about the research at the Harvard Gazette, “The 1,000-robot swarm.”

Image Credit: Harvard University/YouTube