August 2014

3D Scanner Digitally Immortalizes Invaluable Masterpieces in Five Minutes Flat

Last year, the Smithsonian opened a virtual museum. With Smithsonian X 3D Explorer, users can take a virtual tour of (and even 3D print) high-definition digital models of artifacts like Lincoln’s life mask or the Wright Brothers’ plane.

It’s a brilliant concept.

Once digital, artifacts can be easily accessed and explored in detail by researchers, students, and ordinary museum goers. Whereas most museums only keep a fraction of their inventory on display, a digital museum can show it all off all the time. And of course, digitizing objects saves them for posterity should they be lost or damaged.

Digital 3D model of Clark Mills’ Abraham Lincoln life mask at the Smithsonian.

But there’s a problem. The Smithsonian project? Laborious and expensive.

The team took over eight months to scan and 3D model just twenty objects. Meanwhile, the Smithsonian collection comprises 137 million artifacts. As the Smithsonian’s Günter Waibel writes, “Capturing the entire collection at a rate of 1 item per minute would take over 260 years of 24/7 effort.”
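
For the curious, Waibel’s figure checks out with a quick back-of-the-envelope calculation. Here’s a minimal sketch in Python, using only the numbers quoted above:

```python
items = 137_000_000               # artifacts in the Smithsonian collection
minutes_per_year = 60 * 24 * 365  # minutes of round-the-clock scanning per year

print(items / minutes_per_year)   # ~260.7 years at one item per minute, 24/7
```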

Clearly, at the current rate, it would take decades to digitize even a tiny fraction of the world’s cultural heritage. But the problem, a slow and labor-intensive process, is a familiar challenge with a familiar solution: robotics and automation.

Recently, the German visual computing research institution Fraunhofer IGD unveiled an automated 3D scanning system, CultLab3D, that it hopes will sharply increase the rate at which we can 3D scan artifacts, at a tenth or a twentieth of today’s cost.

How does it work? Objects on the machine’s conveyor belt are positioned in the middle of two concentric imaging arcs. A ring of lights and high resolution cameras scan the object from all angles, and after the software spot checks the resulting 3D model, a scanner on a robotic arm moves in to re-image any gaps in the model.

Beyond recording geometry, CultLab3D can reconstruct and record texture and optical material properties with sub-millimeter detail. The process is fully automated and takes roughly five minutes to complete. (Although much faster than manual methods, it would still take a concerted effort over many years to digitize the world’s most valuable pieces.)

Currently, the system can scan artifacts weighing up to 110 pounds that are no more than 2 feet tall and 2 feet in diameter. But the team has plans for automated solutions to image larger objects too—a robotic arm on wheels, for example, and for the biggest objects like buildings or monuments, robotic drones. (These latter devices may prove more difficult to develop.)

Will we soon be able to 3D scan entire monuments and buildings by drone?

Last month, a prototype CultLab3D scanner conducted a test run in Frankfurt’s Liebieghaus Skulpturensammlung museum. Fraunhofer hopes to run further tests this year but says it will be ready to begin production and marketing in 2015.

Whether CultLab3D is the best machine for the job remains to be seen. What’s more certain is that it’s a job worth doing.

The loss or destruction of priceless cultural artifacts by natural disaster or war is all too common. Of course, most people know about the tragic burning of the ancient world’s Alexandria Library and its hoard of knowledge. But similarly destructive events occur to this day, whether it’s destruction by fire, earthquake, or at the hands of soldiers.

Beyond simply preserving artifacts, however, creating high resolution digital copies allows for any number of otherwise impossible applications.

These include instant access to artifacts from anywhere in the world. Today, you might tour them on a screen, tomorrow on an Oculus Rift or other virtual reality device. Observers can get as up close and personal as they like without harm.

And just as we can copy, store, share, and sample masterpieces of music digitally—the same would be true of the world’s great sculptures and monuments. They might be used to populate future virtual worlds or re-materialize, picture perfect under the nozzle of a 3D printer or a digitally guided milling machine.

We’ve already begun digitally backing up and decentralizing copies of humanity’s accumulated knowledge in print—it’s about time we began to back up all the rest.

Image Credit: Norbert Miguletz/Liebieghaus Skulpturensammlung, Fraunhofer Institute for Computer Graphics Research; Smithsonian

We Justify Human Suffering Because We’ve Never Had a Choice in the Matter

Buddha believed the way to end human suffering was the regular practice of meditation and introspection. But Buddha didn’t have biotech.

If our suffering stems from biological and genetic factors and a cocktail of bodily substances indifferently playing havoc with our moods—why couldn’t we tweak our biology to favor pleasure over suffering? To banish depression and malaise?

Philosopher David Pearce says not only will we be able to do this using nanotechnology and genetic engineering, but it’s our moral responsibility to make sure it happens. In Jason Silva’s latest video short, he speculates on Pearce’s Hedonistic Imperative.

As far as we know, humans are the only species conscious of their own mortality. The theme has dominated human thought for ages untold. Philosophy and religion, built brick by brick over millennia, aim to ease our anxiety over death and impermanence.

Much of our musing has focused on how best to deal with or justify death and suffering because, of all our problems, they look the most unassailable, the least likely to yield to technology. Our mortality motivates us to do great works, we say. Suffering informs deep insights about ourselves. Pleasure is only pleasurable relative to pain.

Do we assign positive attributes to human suffering because there is no alternative?

Above all, it’s often said that because death and suffering are a natural part of life, we should resign ourselves to them. In Meditations, Roman emperor and stoic philosopher Marcus Aurelius said, “Despise not death, but welcome it, for nature wills it like all else.”

But biotechnology wasn’t even the hint of a mote in the eye of Marcus Aurelius. And it’s fascinating that the modern mind simultaneously rebels against its own mortality and against the thought of abolishing death and suffering.

Perhaps it’s because we take the idea, pretty radical in itself, more seriously now than we have before. And the worry is we’ll fly too close to the sun on wings we don’t fully understand. From there it’s the long plunge into dystopia, a future scrubbed of that which makes us human.

This worry is most readily traced to literary works of science fiction like Aldous Huxley’s Brave New World among others. Huxley’s dystopic vision may not be a perfect analogy, but it is the one most easily conjured.

We worry technology may cross an invisible line, past which there’s no going back.

Such concerns remind me of a quote I recently read from the early 20th century astronomer and physicist Sir Arthur Eddington: “Science is an edged tool with which men play like children and cut their own fingers.”

Even with all our technology, all our data, what makes us think we can consciously arrange the world from the top down? That we can optimize nature? As the classical liberal economist Friedrich Hayek said, “The curious task of economics is to demonstrate to men how little they really know about what they imagine they can design.”

Dystopian ideas may serve a function, bumping up discussion and debate over the dangers of emerging technologies by extrapolating to a time when they are actually feasible. This preemptive debate can inform how we choose to use new technology.

But we shouldn’t get so caught up in our shadowy imaginings that we mistake our favorite ideal types and literary devices for accurate forecasts. We’ll not choose the darkest paths. And we do have a choice.

Instead of being forced upon us, we freely elect which technologies to use or do away with.

We are always experimenting with new technologies. Picking this up, dropping that. The best technologies, the ones that most effectively fulfill our needs, rise to the top on the votes each of us cast when we decide to use or discard them.

The technology we adopt then is the result of all these little choices added up over time. Technologies come and go, are born, mature, and die. It’s in this messy, generational process that we gradually come to see thorny existential questions in a new light.

This process is not at all new. It’s been going on since the Stone Age. Will we put an end to suffering? I don’t know. But we’ve been moving in that direction for as long as we’ve had tools. Technology is, after all, an attempt to improve human lives.

There are numerous examples of technology adding to our suffering too—but the net result of our incurable tinkering is that most of us suffer far less now than in ages past.

Isn’t it reasonable to believe that trend will continue?

Image Credit: Shots of Awe/YouTube

Eavesdrop on Conversations Using a Bag of Chips with MIT’s ‘Visual Microphone’

MIT’s ‘visual microphone’ is the kind of tool you’d expect Q to develop for James Bond, or to be used by nefarious government snoops listening in on Jason Bourne. It’s like those scenarios except for one crucial difference: this is the real deal.

Describing their work in a paper, researchers led by MIT engineering graduate student Abe Davis say they’ve learned to recover entire conversations and music by simply videoing and analyzing the vibrations of a bag of chips or a plant’s leaves.

The researchers use a high-speed camera to record items—a candy wrapper, a chip bag, or a plant—as they almost invisibly vibrate to voices in conversation or music or any other sound. Then, using an algorithm based on prior research, they analyze the motions of each item to reconstruct the sounds behind each vibration.

The result? Whatever you say next to that random bag of chips lying on the kitchen table can and will be held against you in a court of law. (Hypothetically.)

The technique detects motions as small as a tiny fraction of a pixel, reconstructing sound from the way color shifts at the edges of objects as they vibrate. It works equally well in the same room or at a distance through soundproof glass.
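
To give a flavor of the idea, here is a deliberately simplified sketch of recovering sound from video using only the average brightness of each frame. It is not the MIT team’s actual algorithm, which analyzes far subtler, sub-pixel motion cues at object edges, but it shows how a sequence of frames can be turned into an audio waveform:

```python
import numpy as np

def recover_audio(frames, fps):
    """Toy sound recovery from high-speed video.

    frames: sequence of 2D grayscale arrays; fps: frames per second.
    Tiny vibrations of the filmed object modulate frame brightness
    at audio frequencies, so the per-frame mean acts as a crude mic.
    """
    signal = np.array([frame.mean() for frame in frames], dtype=float)
    signal -= signal.mean()                 # remove the constant (DC) component
    signal /= np.abs(signal).max() + 1e-12  # normalize to roughly [-1, 1]
    nyquist = fps / 2                       # highest recoverable frequency,
                                            # e.g. 1-3 kHz at 2,000-6,000 fps
    return signal, nyquist
```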

The results are impressive (check out the video below). The researchers use their algorithm to digitally reassemble the notes and words of “Mary Had a Little Lamb” with surprising fidelity, and later, the Queen song “Under Pressure” with enough detail to identify it using the mobile music recognition app, Shazam.

While the visual microphone is cool, it has limitations.

The group was able to make it work at a distance of about 15 feet, but they haven’t tested longer distances. And not all materials are created equal. Plastic bags, foam cups, and foil were best. Water and plants came next. The worst materials, bricks for example, were heavy and only poorly conveyed local vibrations.

Also, the camera matters. The best results were obtained from high-speed cameras capable of recording 2,000 to 6,000 frames per second (fps)—not the highest frame rate out there, but orders of magnitude higher than your typical smartphone.

Even so, the researchers were also able to reproduce intelligible sound using a special technique that exploits the way many standard cameras record video.

Your smartphone, for example, uses a rolling shutter. Instead of recording a whole frame at once, the sensor records it line by line. This isn’t ideal for image quality, but the distortions it produces encode motion the MIT team’s algorithm can read.

The result is noisier than sound reconstructed using a high-speed camera. But it theoretically lays the groundwork for reconstructing audio information, from a conversation to a song, using no more than a smartphone camera.
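
As a rough illustration of why a rolling shutter helps, consider this conceptual sketch (assuming a simple top-to-bottom row readout, which is an illustrative simplification rather than a description of any particular sensor): because each row of a frame is exposed at a slightly different instant, a single frame yields many time-ordered brightness samples instead of one.

```python
import numpy as np

def rolling_shutter_samples(frame, frame_start_time, row_readout_time):
    """Treat each row of a rolling-shutter frame as its own time sample.

    frame: 2D grayscale array; row_readout_time: seconds between row exposures.
    Returns per-row timestamps and per-row mean brightness, giving an
    effective sampling rate far above the camera's nominal frame rate.
    """
    num_rows = frame.shape[0]
    times = frame_start_time + np.arange(num_rows) * row_readout_time
    samples = frame.mean(axis=1)  # average brightness of each row
    return times, samples
```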

Primed by the news cycle, the mind is almost magnetically drawn to surveillance and privacy issues. And of course the technology could be used for both good and evil by law enforcement, intelligence agencies, or criminal organizations.

However, though the MIT method is passive, the result isn’t necessarily so different from current techniques. Surveillance organizations can already point a laser at an item in a room and infer sounds based on how the light scatters or how its phase changes.

And beyond surveillance and intelligence, Davis thinks it will prove useful as a way to visually analyze the composition of materials or the acoustics of a concert hall. And of course, the most amazing applications are the ones we can’t imagine.

None of this would be remotely possible without modern computing. The world is full of information encoded in the unseen. We’ve extended our vision across the spectrum, from atoms to remote galaxies. Now, technology is enabling us to see sound.

What other hidden information will we one day mine with a few clever algorithms?

Image Credit: MIT Computer Science and Artificial Intelligence Laboratory (CSAIL)/YouTube

The Innovator’s New Dilemma: The Serious Emotional Toll Of Entrepreneurial Failure

Nobody ever has a bad day in Silicon Valley. Seriously. Not ever.

Of course this isn’t true. Every day people have bad days in Silicon Valley. They get fired and divorced and demoted, and their startup is tanking, and their kids won’t stop screaming, and the car won’t start, and the mortgage payment is due and all the rest—yet you rarely hear much about it.

Sure, occasionally, someone may sit you down to talk through some “issues” surrounding raising capital or the occasional “messaging” problem, but it’s rare to have a deep, intimate, personal conversation (though big think conversations abound) about the current state of the ass-kicking life is dishing out.

Before we get into the reasons I’m bringing this up, I want to first try to flesh out this idea a bit via comparison. I have lived in 17 cities in my life and one of the things I’ve noticed is emotive expression—a term I’m using to get at the depth of intimacy people display—is heavily influenced by both geography and culture.

For example, I live in Northern New Mexico (though am in SV about one-fifth of the year). This region is very, very heavy with Penitentes—a branch of Catholicism that believes individuals should atone for their sins via self-flagellation, carrying heavy crosses, and the like. To put this in different terms, Northern New Mexico culture is built, at least in part, on suffering. Suffering is a badge of honor. Thus, when you bump into someone at the grocery store, most of the time, after you ask how they are, the answer you get is about what new illness or catastrophe is currently plaguing them or their families.

Or take the Midwest. I grew up in Illinois and Ohio and went to college in Wisconsin. From my time there, I can tell you the standard “work hard, don’t lie” Midwest creed extends into how folks from the center of the country express emotion. People have bad days in the Midwest. And they’ll tell you about it. But they don’t wallow. The culture is built on a “soldier on” mentality, so what you hear about there is the massive roadblock they (that is, Midwesterners) encountered, but the conversation quickly shifts to what they’re doing to get beyond it.

But this kind of emotive expression isn’t typical in Silicon Valley, at least not in my experience, and not in other people’s experience either. Jessica Bruder, in a great article for Inc. about the “psychological price of entrepreneurship,” noticed something similar:

Successful entrepreneurs achieve hero status in our culture. We idolize the Mark Zuckerbergs and the Elon Musks. And we celebrate the blazingly fast growth of the Inc. 500 companies. But many of those entrepreneurs, like Smith, harbor secret demons: Before they made it big, they struggled through moments of near-debilitating anxiety and despair–times when it seemed everything might crumble.

Until recently, admitting such sentiments was taboo. Rather than showing vulnerability, business leaders have practiced what social psychiatrists call impression management–also known as “fake it till you make it.” Toby Thomas, CEO of EnSite Solutions (No. 188 on the Inc. 500), explains the phenomenon with his favorite analogy: a man riding a lion. “People look at him and think, This guy’s really got it together! He’s brave!” says Thomas. “And the man riding the lion is thinking, How the hell did I get on a lion, and how do I keep from getting eaten?”

The reason I mention this is because one of the core philosophical tenets in Silicon Valley is “Fail Fast, Fail Often, Fail Forward.” This idea shows up everywhere. Blogs, articles, books (including my own, see Abundance). Hell, Facebook has a sign hanging in their main stairwell reading: “Move Fast, Break Things.”

And for good reason. This is a great tenet. Rapid iteration, especially in this era of exponential technology, is fundamentally critical to success. Fast failure is the fuel that drives the rapid iteration engine. This is why Reid Hoffman famously said: “If an entrepreneur isn’t embarrassed by their first launch, then they launched too late.” The whole minimum viable product ethos is about failing faster, so is iterative design and agile manufacturing and many of the biggest business ideas around today.

But the entire subtext of the failure motto is to diagnose the error, learn from it, and move on to the next iteration as quickly as possible. To do this, you can’t hide the failure, you must bring it out into the sunlight and analyze the ever-living hell out of it.

Yet, in the interpersonal lives of Valley denizens (and entrepreneurs in general), people fail just as fast, but these failures are never celebrated. They’re not diagnosed. Mostly, they’re hidden away. Mostly, it’s like they never happened.

For certain, this isn’t the best strategy for personal learning and growth. For certain, this constant attempt to bury emotion has long-term health consequences. For certain, there are real-world business consequences as well (anyone who is trying to hide their feelings is using up a lot of energy, and that’s certainly going to impact things like focus, resilience, and overall performance). For certain, this list goes on.

In other words, there’s a hidden emotional cost to all this emphasis on failure, but no one wants to talk about paying that piper. Failure is emphasized, but the fact that failing is depressing, demoralizing and debilitating doesn’t come into the discussion.

Two issues. First, by ignoring the emotional side of this coin, you’re losing critical information—information about what went wrong and what can go right next time.

Second, not many of us are that tough. Not over the long haul. Bury this kind of deep pain too often and, as we all know, sooner or later it’s coming back with a vengeance.

Not that I have any real suggestions here. My only thinking is that if we’re going to emphasize and even incentivize failure, then it might be useful to find a way to manage the emotional fallout. Meaning, with all this technological innovation coming out of Silicon Valley (and other entrepreneurial hotbeds), isn’t it time for a little emotional innovation as well?

[Photo credit: Clyde Robinson/Flickr, Happy face ball/Wikipedia]

This Week’s Awesome Stories from Around the Web

Since last week’s reading list was well received, we’re serving up another round of the most intriguing articles in science and technology this week. And if you see one that we missed, feel free to add it in the comments.

Siri’s Inventors Are Building a Radical New AI That Does Anything You Ask | WIRED
“‘I’m extremely proud of Siri and the impact it’s had on the world, but in many ways it could have been more…now I want to do something bigger than mobile, bigger than consumer, bigger than desktop or enterprise. I want to do something that could fundamentally change the way software is built.’”

Crowdfunding and Venture Funding: More Alike Than You Think | NY Times 
“’The crowd is often thought as being crazy. There was a sense that they would back musicals about Internet cats, and experts would back serious work…it turns out the crowd does consider the quality of projects and outcomes pretty well.’”

Building Mind-Controlled Gadgets Just Got Easier | IEEE Spectrum
“‘The first time you get something to move with your brain, the satisfaction is pretty amazing.’”

No, Dystopian Sci-Fi Isn’t Bad for Society. We Need It More Than Ever | WIRED
“With so much technology constantly barraging us with the idea that every innovation is going to “change the world for the better,” where better to suss out the good and the bad than in fiction?”

First-Person Hyperlapse Videos | Microsoft.com
“We present a method for converting first-person videos, for example, captured with a helmet camera during activities such as rock climbing or bicycling, into hyper-lapse videos, i.e., time-lapse videos with a smoothly moving camera.”

Plants May Use Newly Discovered Molecular Language To Communicate | Science Daily
“‘The discovery of this novel form of inter-organism communication shows that this is happening a lot more than any one has previously realized.’”

The Maker’s Mark: Yves Behar is the man behind Silicon Valley’s most beautiful gadgets | The Verge
“’The secret,” he says, “is in the relationship between the person and the object.’”

Is Encrypted Messaging Entering the Mainstream? | The Wall Street Journal
“‘If asked whether they think privacy is very important, [consumers] will likely answer very positively, but still share lots of personal information quite freely on social networking sites or in exchange to get a discount or money off voucher.’”

Humans Need Not Apply | CGP Grey/YouTube
“Just as mechanical muscles made human labor less in demand, so are mechanical minds making human brain labor less in demand.”

[Image credit: Keoni Cabral/Flickr]

Lab-Grown Neurons Deliver a Real-Time Glimpse Into How the Brain Works

Currently, researchers study the human brain by inference. Because they can’t closely observe a living brain in the lab as its owner goes about his day, they do the next best thing: tracking blood flow and electrical activity as subjects perform various tasks.

Scientists, however, are now growing brain tissue in petri dishes to study neurons up close and personal. So far this lab-grown tissue can only grow in two dimensions, instead of the brain’s native three dimensions, and doesn’t form the typical segmented structure of grey matter (neuron cell bodies) and white matter (axon bundles).

However, in work funded by the National Institute of Biomedical Imaging and Bioengineering, David Kaplan of Tufts University and a team of researchers have created 3D brain-like tissue (using rat neurons) that is functionally and structurally similar to tissue in a living rat brain. Further, in contrast to prior methods, the tissue is long-lived—persisting for more than two months in the lab.

To demonstrate its potential as a research tool, the team used it to study the chemical and electrical changes induced by a traumatic brain injury and, separately, the changes due to a drug. Injuries and brain ailments affect the brain’s grey matter and white matter differently, so fully three dimensional tissue like theirs may enrich brain models.

When the scientists simulated brain injury by dropping a weight on the tissue, the behavior of the neurons in the tissue matched observations made in studies of similar injuries in animals. The method may prove superior to animal studies because there is no delay in observation (animal studies require dissection and sample preparation).

“With the system we have, you can essentially track the tissue response to traumatic brain injury in real time,” said Kaplan. “Most importantly, you can also start to track repair and what happens over longer periods of time.”

This scanning electron microscope image of the silk-based scaffold used to grow the brain-like tissue reveals its porous, sponge-like composition.

Other attempts to grow 3D brain tissue in gels—where neurons are free to grow and make connections in all directions—have failed to result in long-lived or healthy tissue function. Providing enough space, as it turns out, is not the only requirement. Getting the composition of the environment right is also critical.

Kaplan’s team grew neurons on a scaffold (top image) with a central collagen-based gel surrounded by a donut of silk protein. The neurons anchored themselves to the protein and grew axons through the gel center. A few days later, the cells had grown networks in the silk (grey matter) and sent axons through the gel (white matter).

That the tissue is healthy and long-lived is significant.

According to Kaplan, “The fact that we can maintain this tissue for months in the lab means we can start to look at neurological diseases in ways that you can’t otherwise because you need long timeframes to study some of the key brain diseases.”

The team hopes to improve their method and make the tissue an even better brain analog by building a scaffold of six concentric rings separated by gel and populated by the six kinds of neurons that make up the various layers of the human brain’s cortex.

They hope that better models derived by studying living tissue in the lab will beget better candidate hypotheses for testing, and ultimately, all this will quicken the development of therapies for diseases and further our understanding of how healthy brains work.

Learn more about the research at National Institute of Biomedical Imaging and Bioengineering, “Bioengineers Create Functional 3D Brain-like Tissue.”

Image Credit: Tufts University

Burger Robot Poised to Disrupt Fast Food Industry

I saw the future of work in a San Francisco garage two years ago. Or rather, I was in proximity to the future of work, but happened to be looking the other direction.

At the time, I was visiting a space startup building satellites behind a carport. But just behind them—a robot was cooking up burgers. The inventors of the burger device? Momentum Machines, and they’re serious about fast food productivity.

“Our device isn’t meant to make employees more efficient,” cofounder Alexandros Vardakostas has said. “It’s meant to completely obviate them.”

The Momentum burger-bot isn’t remotely humanoid. You can forget visions of Futurama’s Bender. It’s more of a burger assembly line. Ingredients are stored in automated containers along the line. Instead of using pre-prepared veggies, cheese, and ground beef, the bot chars, slices, dices, and assembles it all fresh.

Why would talented engineers schooled at Berkeley, Stanford, UCSB, and USC with experience at Tesla and NASA bother with burger-bots? Robots are increasingly capable of jobs once thought the sole domain of humans—and that’s a huge opportunity.

Burger robots may improve consistency and sanitation, and they can knock out a rush like nobody’s business. Momentum’s robot can make a burger in 10 seconds (360/hr). Fast, yes, but also superior quality. Because the restaurant is free to spend its savings on better ingredients, it can make gourmet burgers at fast food prices.

Or at least, that’s the idea.

Momentum Machines says your average fast food joint spends $135,000 a year on burger line cooks. Employees work in a chaotic kitchen environment that necessitates no-slip shoes in addition to the standard hairnets and aprons.

Momentum Machines’ burger robot looks nothing like this retro robot chef—but it’s still awesome.

By replacing human cooks, the machine reduces liability, management duties, and, at just 24 square feet, the overall food preparation footprint. Resources once dedicated to preparation can instead fund better service.

Of course, businesses are free to spend their savings however they like.

For some, that may mean more quality ingredients or services. For others, it might be competing with other restaurants by maintaining the same level of service and ingredients but offering even lower food prices.

But Momentum Machines’ burger-bot isn’t provocative for its anticipated effects on fast food quality. The bot, and other robots like it, may soon replace low-skilled workers in droves. If one machine developed in a garage in San Francisco can do away with an entire kitchen of fast food staff—what other jobs are about to disappear?

Earlier this year, McDonald’s employees protested outside the fast food chain’s corporate headquarters in Chicago, demanding higher wages. A robotic kitchen might bring improved pay for the front of the house, and a pay cut to zero for the back. Some fraction of the 3.6 million US fast food jobs might be automated by such technology.

While the burger-bot hasn’t taken anyone’s job yet, Momentum Machines is clearly sensitive to the worry. The firm says they want to help support those who may lose work as a direct effect of restaurants adopting the robot.

“We want to help the people who may transition to a new job as a result of our technology the best way we know how: education.”

As new technology destroys one kind of job, it creates opportunities for others. We’ll need fewer line cooks, they say, but more engineers and technicians. The problem isn’t that jobs are lost on net, it’s the resulting skills gap. Transitioning into new work can be difficult to navigate, especially for low-wage workers.

Momentum Machines wants to help ease the move by partnering with vocational schools to offer discounted technical training for anyone displaced by their robot. Their goal is indicative of the overall tenor of an increasingly heated debate about how AI and robotics may reduce human employment in the near future.

In a recent Pew Survey, some 1,900 technology experts agreed that robots will be a pervasive part of daily life by 2025. Automation will infiltrate industries like health care, transport and logistics, customer service, and home maintenance.

However, those polled were split on whether the impending wave of automation would be good or bad for workers: 52% believe AI and robotics will be a net positive for employment, and 48% believe the opposite.

The typical line of reasoning from the positive camp is that we’ve consistently been shedding “traditional” jobs and replacing them with brand new modes of work for the last few hundred years. In the early decades of the 20th century, most people were farmers or factory workers. Now, thanks to huge technological productivity gains, agricultural and factory workers each make up only about 2% of the workforce.

Has this resulted in massive unemployment? Quite the opposite. A profusion of new jobs that didn’t exist back then and were unimaginable to even the far-sighted have taken their place. Further, the quality of life for most people has improved drastically. This is what history indicates should happen again with advanced AI and robots.

The negative camp is less sure our technological creations will prove to be a good thing overall. They say something’s different this time.

Are we destined for a painful period of adjustment to powerful new forms of automation?

Two MIT economists associated with the topic of technological unemployment, Erik Brynjolfsson and Andrew McAfee, have written two books on the subject. They think robots more advanced than Momentum Machines’ burger cook will soon arrive, and while they will be a force for good in the long run, they’ll displace human workers and cause strife well before we get there.

The problem isn’t that new jobs won’t be created, but that the looming transition period will be more difficult to navigate because the speed, depth, and breadth of the change will be unrivaled.

Might companies like Momentum Machines offering to foot some of the bill to retrain workers be the solution? Maybe. Others think we’ll need more drastic policies, like a guaranteed minimum income. But perhaps we’re already better equipped to adapt to leaps in automation and productivity today and in the near future than we realize.

Today’s work force is as flexible as it’s ever been. We can more easily search job opportunities online; we’re less geographically limited; we have quick access to masses of information; we can earn technical degrees and certificates online; and many people already cycle through multiple roles in multiple industries during their careers.

That’s not to say automation won’t present challenges for some people. It always has. And the topic will almost undoubtedly get more politically divisive from here. But the burger-bots are coming, and we think the net result over time will be as positive in the coming decades as it has been time and again over the last two centuries.

Image Credit: Momentum Machines; Sam Howzit/Flickr; Max Kiesler/Flickr; woodleywonderworks/Flickr

Pen that Scans and Draws in Millions of Colors Finally Arrives on Kickstarter

Ever bought a king-size box of colored pencils and marveled at all the names? Burnt sienna, cerulean blue, tuscan red. The world is overflowing with colors, too many to count or name. What if you had a single pen that contained them all?

The Scribble color matching pen (or stylus) uses a color sensor and LED illumination to sample and upload colors (say from your wall or a piece of clothing) to a mobile device or computer, and then reverses the process, allowing you to draw in any color.

The pen works like a handheld printer, using its ARM 9 microprocessor to digitize colors and mix the inks in an onboard CMYK cartridge. Scribble can reproduce over 16 million colors—100,000 of which can be stored on its 1 GB onboard chip. It runs on a rechargeable lithium ion battery and connects by Bluetooth or micro USB.
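
Scribble hasn’t published its firmware, but the core color step is presumably something like the textbook RGB-to-CMYK conversion sketched below. This is an assumption for illustration only; a real pen would also need careful calibration of its sensor and inks.

```python
def rgb_to_cmyk(r, g, b):
    """Convert 8-bit RGB values to CMYK fractions (0.0-1.0).

    A naive, uncalibrated conversion of the kind a color-mixing pen
    might start from after sampling a surface with its sensor.
    """
    if (r, g, b) == (0, 0, 0):
        return 0.0, 0.0, 0.0, 1.0           # pure black: only the K channel
    c, m, y = 1 - r / 255, 1 - g / 255, 1 - b / 255
    k = min(c, m, y)                        # pull shared darkness into K
    return (c - k) / (1 - k), (m - k) / (1 - k), (y - k) / (1 - k), k

# Example: a saturated orange sampled from a wall.
print(rgb_to_cmyk(255, 140, 0))  # -> (0.0, ~0.45, 1.0, 0.0)
```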

Scribble not only offers multiple colors all in one package, it also offers multiple stroke weights with its replaceable nib and six tip sizes. The pen works with iOS, Android, PC, and Mac and is compatible with Photoshop and Corel.

It should probably be noted this isn’t an entirely new idea. It’s been floating around for a while—see this conceptual design by Jinsun Park, for example. But no one (as far as we can tell) has yet succeeded in making it into a tangible product for sale.

Though the Scribble team has a working prototype, they needed Kickstarter to fund creation of the final product. If all goes to plan—and often it doesn’t, as nailing down a manufacturing process can be a sticking point—it’ll be available late next spring.

The team has big expectations, saying in their first press release Scribble is “cutting edge technology that’s on the verge of becoming a household gadget.” They may be right. The Kickstarter was funded in five hours and is currently closing in on $300,000 with more than a month left in the campaign.

But going from Kickstarter to household gadget won’t be easy. For one thing, $149 (more than an 8GB Kindle Fire) isn’t cheap. And refill cartridges ($15 to $30 each) will add to costs, maybe appreciably, depending on how quickly users run out of ink.

Scribble may be too expensive for kids or technophobic crafters. Also, the pen can store lots of colors, but how easy it is to find and switch them isn’t clear. Is an external device required every time you want a new color? What’s the pen’s interface like? Depending too much on a nearby computer or mobile device might limit some of its perceived utility.

For half the price, the stylus version, which lives more exclusively in the digital world (no ink required), might be the better value. Designers could sample colors for clients on the spot or match mystery paint on the wall. The color blind could use it to identify colors. These uses presume a degree of accuracy, but they seem like reasonable applications.

At the same time, simple color scanning functionality can already be found in free smartphone apps. Maybe they aren’t as accurate, but such apps using smartphone cameras are available for Android and iOS. And smartphones may soon come stock with new visual sensors. Google Tango or this new Microsoft Research device show that the visual powers of mobile devices are poised to move beyond simple cameras.

Will Scribble become a household device? Or will some of its powers be usurped by smartphones? We don’t know. In any case, it’s still a cool example of miniature tech going mobile and miniature sensors providing a two-way link between real and digital.

Image Credit: Scribble

Top 10 Reasons Drones Are Disruptive

If you think today’s drones are interesting, you ain’t seen nothing yet.

Drones are in their deceptive phase, about to go disruptive. Check out where they’re going…

What makes today’s “drones” possible?

The billion-fold improvement in key technologies we’ve seen between 1980 and 2010 is what makes today’s drones possible, specifically in four areas:

1. GPS: In 1981, the first commercial GPS receiver weighed 50 pounds and cost over $100K. Today, GPS comes on a 0.3 gram chip for less than $5.
2. IMU: An Inertial Measurement Unit (IMU) measures a drone’s velocity, orientation and accelerations. In the 1960s an IMU (think Apollo program) weighed over 50 lbs. and cost millions. Today it’s a couple of chips for $1 on your phone.
3. Digital Cameras: In 1976, Kodak’s first digital camera shot at 0.1 megapixels, weighed 3.75 pounds and cost over $10,000. Today’s digital cameras are a billion-fold better (1000x resolution, 1000x smaller and 100x cheaper).
4. Computers & Wireless Communication (Wi-Fi, Bluetooth): No question here. Computers and wireless price-performance have gotten a billion times better between 1980 and today.

10 Industries Using Today’s Drones:

1. Agriculture: Drones watch for disease and collect real-time data on crop health and yields. This is an estimated $3B annual market size.
2. Energy: Energy companies monitor miles of pipeline and oil rigs with autonomous drones.
3. Real Estate and Construction: Drones photograph, prospect and advertise real estate from golf courses to skyscrapers; they also monitor construction in progress.
4. Rapid Response and Emergency Services: Drones aid in search and rescue operations ranging from forest fire fighting to searching for people buried in rubble or snow using infrared sensors.
5. News: It’s faster and safer to deploy drones to cover breaking news/disaster/war zones than news crews.
6. Package/Supply Delivery: Companies like Matternet (founded at Singularity University) are building networks of UAVs to deliver food and medical supplies to remote villages around the world.
7. Photography/Film: Visual artists use drones to capture beautiful new images and camera angles.
8. Scientific Research/Conservation: Drones assist in everything from counting sea lions in Alaska to conducting weather and environmental research to tracking herd movements on the Savannah in Africa.
9. Law Enforcement: Drones can be used during hostage situations, search and rescue operations, bomb threats, when officers need to pursue armed criminals, and to monitor drug trafficking across our borders.
10. Entertainment/Toys: Good old fun.

Where Next?

What happens in the next 10 years when drones are 1000x better? Or 30 years from now when they are 1,000,000,000x better? What does that even mean, or look like? Here are some directions for your imagination:

Smart and Autonomous: Drones will have a mind of their own… thinking, doing, navigating, avoiding, seeking, finding, sensing and transmitting.

Microscopic and Cheap: Think about drones the size of a housefly, sending you full-motion HD video. Think swarms of drones (hundreds) where losing half of your swarm won’t matter because another hundred are there to replace them. How much will they cost? I would be shocked if the price doesn’t plummet to less than $10 each… maybe $1.

Top Future Drone Applications?

1. Pollination: Imagine bee-sized drones pollinating flowers (in fact, we’re actually doing this now);
2. Personal security: In the future, your children will have a flotilla of micro-drones following them to school and to playgrounds at all times, scanning for danger;
3. Action sports photography: Imagine 100 micro-drone-cameras following a downhill skier capturing video from every angle in real time;
4. Asteroid prospecting and planetary science: On a cosmic scale, my company Planetary Resources is building the ARKYD 300 — effectively a space drone with 5km per second delta-V. PRI plans to send small flotillas of four to six A300 drones (with onboard sensors) to remote locations like the asteroids or the moons of Mars;
5. Medical in-body drones: On the microscopic scale, each of us will have robotic drones traveling through our bodies monitoring and repairing;
6. High Altitude “Atmospheric Satellite” Drones: Google recently announced Project Loon to provide a global network of stratospheric balloons, and then acquired Titan Aerospace to provide solar-powered aerial drones, both of which could blanket the entire planet to provide low-cost Internet connectivity, anytime, anywhere; and,
7. Ubiquitous surveillance: Combined with facial recognition software and high-resolution cameras, drones will know where everybody and everything is at all times. Kiss privacy goodbye. Are you a retailer? Want to know how many people are wearing your product at any time? Future imaging drones will give you that knowledge.
8. Military and Anti-terrorism: Expect a significant increase in defense-related applications of drones in war zones and in your local backyard, sensing and searching for dangers ranging from biological to radiation.

What are the Challenges?

Technical challenges aside, we’ll have to address many sociopolitical challenges before drones become disruptive.

There are concerns over privacy and spying, interference with planes/helicopters, drones aiding illegal activities, safety and potential crashes, noise and cluttering the skies, theft and commercial use.

I recommend looking at the FAA Modernization and Reform Act of 2012 to get a glimpse of the legal landscape surrounding drones.

This bill expires in September of 2015.

In other words, pending major legislative changes, expect 2015 to be a big year for drones.

Why are drones going to be disruptive?

Besides all of the use cases outlined above, drones represent an interesting convergence of three exponential technology areas:

1. The Internet of Everything: Drones will be a key part of our trillion-sensor future, transporting a variety of sensors (thermal imaging, pressure, audio, radiation, chemical, biologics, and imaging) and will be connected to the Internet. They will communicate with each other and with operators.
2. Advanced Battery Technology: Increases in energy density (kilowatt-hours per kilogram) will allow drones to operate for longer periods of time. Additionally, solar battery technology is allowing high-altitude drones to fly for weeks at a time without landing.
3. Automation Software and Artificial Intelligence: Hundreds of teams around the world are working on automation systems that a) make drones easier for untrained users to fly, but more importantly, b) allow drones to fly and operate autonomously.

This is just the start.

[Photo credit: Lima Pix/Flickr; Michael MK Khor/Flickr]

 

More from Peter Diamandis:

At my Abundance 360 Executive Summit in January 2015, we’ll discuss this in much more detail and talk about potential investment opportunities in this arena. If you’re interested in joining me, there are only a few slots left. Apply here.

Every weekend I send out a blog like this one with my latest insights on technology. To make sure you never miss one, head to www.AbundanceHub.com to sign up for this and my Abundance blogs. And if you want my personal coaching on these topics, consider joining my Abundance 360 coaching program for entrepreneurs.

These Battery-Free, WiFi Devices Run On Radio Waves

In the last decade, mobile devices have become radically smaller and more powerful. The list of tech-related tasks that the miniature black monolith we all tote around can handle has grown longer by the year. The next step in technology’s great disappearing act? Absorption into our clothes, bodies, and environment.

The question of how best to power that next step, however, remains an open one.

Wearable and Internet-of-Things technologies need to be ‘on’ all the time. In the former case, taking something on or off for recharging—like a health monitoring device—causes data loss and increases the chances it won’t be used as much as it should be, or at all. And you don’t want to wire or recharge sensors embedded throughout smart homes, offices, and cities.

But what if these devices could pull enough power wirelessly from the air to run themselves and send signals? Sound like sci-fi? Not so, according to a group of University of Washington engineers building a communication system called WiFi backscatter. The system powers devices using radio waves and connects them to laptops or smartphones over WiFi networks.

Previous research had shown it possible to run low-power devices off radio, TV, and wireless signals. The most recent work, however, takes these devices a step further by allowing them to send their own signals using far less power than is usually required.

“If Internet of Things devices are going to take off, we must provide connectivity to the potentially billions of battery-free devices that will be embedded in everyday objects,” said Shyam Gollakota, a UW assistant professor of computer science and engineering. “We now have the ability to enable WiFi connectivity for devices while consuming orders of magnitude less power than what WiFi typically requires.”

How does it work? The team made a tag that listens for WiFi signals being sent from a local router to a laptop or smartphone and vice versa. The tag’s antenna encodes data by selectively reflecting or absorbing the signals. This selective reflection makes tiny changes in signal strength that can be detected and decoded by other devices.

Using this method, more powerful central devices like smartphones, tablets, or laptops can communicate with a range of low-power devices and sensors within about two meters and at a rate of one kilobit per second.
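
Here is a minimal sketch of that encode/decode idea. It’s a toy model with made-up numbers, not the UW team’s actual protocol: the tag toggles between reflecting and absorbing, and the receiver recovers bits by averaging and thresholding the resulting small changes in signal strength.

```python
import numpy as np

SAMPLES_PER_BIT = 8  # how many signal-strength readings fall in one bit period

def tag_encode(bits):
    """Tag side: reflect (1) or absorb (0) the ambient WiFi signal."""
    return np.repeat(bits, SAMPLES_PER_BIT)

def receiver_decode(rssi):
    """Receiver side: average signal strength over each bit period and
    threshold against the overall mean to detect the tag's tiny reflections."""
    per_bit = rssi.reshape(-1, SAMPLES_PER_BIT).mean(axis=1)
    return (per_bit > rssi.mean()).astype(int)

# Example: reflections add ~0.5 dB on top of a noisy -40 dBm baseline.
bits = np.array([1, 0, 1, 1, 0])
rssi = -40 + 0.5 * tag_encode(bits) + 0.05 * np.random.randn(bits.size * SAMPLES_PER_BIT)
print(receiver_decode(rssi))  # -> [1 0 1 1 0]
```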

A pair of smart socks could, for example, relay data about your jog to a jogging app on your phone. Or temperature sensors throughout your house could communicate with thermostats to maintain an optimal temperature inside.

Joshua Smith, a co-author of an upcoming paper on the system and UW associate professor of computer science and engineering and electrical engineering, says that although the signals are tiny and could well be lost in noise, the devices in the system are able to detect them because they know which specific patterns to look for.

The team will present their findings at the ACM Sigcomm annual conference in Chicago. They are working to extend the system’s range to 20 meters, have filed patents, and hope to start a company based on the technology.

“Given the prevalence of WiFi, this provides a great way to get low-power Internet of Things devices to communicate with a large swath of devices around us,” Ranveer Chandra, a senior researcher in mobile computing at Microsoft Research, told the MIT Technology Review.

Read more at Science Daily: No-power Wi-Fi connectivity could fuel Internet of Things reality

Image Credit: Shutterstock.com