
October 2014

Next-Gen Diagnostics, Nanotechnology, and Bioengineering Begin Taking the Fight to Cancer


According to the World Health Organization (WHO), cancer claimed 8.2 million lives worldwide in 2012. Perhaps no other disease highlights the need for improved diagnostic and treatment options better than cancer—which is why it’s good news there continue to be promising developments in the lab.

Here are a few early studies we’ve covered this year.

A Cancer Blood Test for Earlier Diagnosis?

University of Bradford scientists, led by Dr. Diana Anderson, hypothesized that DNA in white blood cells from cancer patients wouldn’t be able to withstand or repair UV damage as well as DNA from healthy volunteers. Why? Because cancer patients’ immune systems would be working overtime even in the disease’s earliest stages.

After subjecting samples to ultraviolet light, the scientists pulled their DNA apart in an electrophoresis gel (a traditional DNA analysis tool). They found white blood cell DNA from healthy patients showed up in short streaks, whereas DNA from precancerous or cancer patients left longer streaks, indicating more damage from the UV light.

Though the samples were randomized and coded, the researchers correctly identified 94 healthy subjects, 58 patients with cancer, and 56 patients in the precancerous stage of disease. The samples were from patients with melanoma, colon cancer, and lung cancer—next steps include broader trials over more types of cancer.

Another study of 1,500 patients at MIT found elevated levels of particular amino acids—leucine, isoleucine, and valine—in the blood may help practitioners spot notoriously difficult-to-diagnose pancreatic cancer 1 to 10 years before current methods.

Though the latter biomarkers may only prove useful in pancreatic cancer diagnosis—they have not been observed in other cancers—taken together, the studies suggest that the body may show subtle signs of the disease well before more obvious symptoms.

In the future, these and other biomarkers may improve our ability to take the fight to cancer earlier and allow for better results with less invasive tests and treatments.

Buy Time by Slowing Cancer’s Spread

Locating a consistently accurate biomarker of cancer in the blood could improve diagnosis—but what then? Cancer is dangerous in one organ, but deadly when it spreads to others. What if we could freeze the disease in its tracks?

In a recent preclinical study, Jennifer Cochran, a Stanford associate professor of bioengineering, and Amato Giaccia, professor of radiation oncology, say they significantly slowed cancer metastasis in mice using a lab-designed decoy protein.

Tumors metastasize when bristly Axl proteins on a cancer cell’s surface interact with Gas6 proteins. When these two proteins link up, the cell is able to break off from the main tumor and move to other locations in the body.

The Stanford scientists bioengineered a decoy Axl protein designed to be up to a hundred times more effective at binding Gas6 than the natural version. Deployed in the mice, the decoy protein binds Gas6 proteins in the blood before they can link up with and activate Axl proteins on the cancer cells.

The scientists say they found a 78% reduction in metastatic nodules in mice with breast cancer and a 90% decrease in metastatic nodules in mice with ovarian cancer.

Seek-and-Destroy Cancer Cells—and Only Cancer Cells

After finding and slowing cancer—we need to eliminate it from the body. Current cancer treatments employing radiation and chemotherapy are a bit like a shotgun blast or carpet bombing run. They lack precision and collateral damage is significant. A better option would be something like a laser-guided missile destroying only cancer cells.

A recent Rice University study tagged gold nanoparticles—particles only a few billionths of a meter across—with cancer-specific protein antibodies that guided them to tumors and caused them to cluster inside the cancerous cells. Once in position, the researchers blasted the particles with a laser, causing them to burst and destroy host cells.

In addition to the physical destruction of cells, the particles were loaded with cancer-fighting chemotherapy drugs, which were released after the laser zap. Finally, the particles also served to locally magnify X-ray radiation at the site of the tumor.

In a preclinical study, this four-pronged approach (nanoparticles, laser, drugs, and radiation) required only 3% of the standard drug dose and 6% of the standard radiation dose to destroy tumors in mice within a week, with minimal damage to surrounding healthy cells.

The team’s antibody method of targeting the particles isn’t the only method on the table. Another study out of Rice showed that iron oxide nanoparticles can be precisely maneuvered inside the body using magnetic fields, and a recent MIT video shows just what the technique looks like using magnetic, fluorescent nanoparticles.

Progress But No Miracles

It’s important to stress that while these results are promising, it is too early to know how effective they will be in the clinic. Cancer diagnostic tests and even treatments can show great potential early on, only to crumble when put through more rigorous testing.

Further, this isn’t a comprehensive list of the latest cancer-fighting tools—it’s a big, well-funded area of research. But generally, earlier diagnosis and better-targeted, more personalized therapies, however they’re accomplished, may drastically alter the fight.

As we’re better able to effectively employ these strategies—the probability of survival should increase, even as the treatments themselves become easier to bear.

Article written with contributions from Peniel Dimberu. 

Image Credit: MIT, Shutterstock.com


Travel to a Black Hole’s Edge and a Far Future Where Humans Are Just the Same


No invention was born outside the mind of man. So, every once in a while, it’s worth mining a few sci-fi visions for the dark, twisted, unexpected, hopeful, or inspiring.

Astrophysicist Kip Thorne—whose book on black holes sits on my shelf—recently teamed up with director Christopher Nolan of Dark Knight fame to build the perfect CGI black hole for the film Interstellar. As we’ve never directly observed a black hole, the team had to construct their visualization from Einstein’s equations, as solved by Thorne.

They say it’s as accurate a depiction as has ever appeared on the silver screen.

Thorne, who had only ever viewed a black hole and its attendant gravitational lensing in his equations, described the first time he saw the simulation: “I saw this disc wrap up over the black hole and under the black hole. I’d known it intellectually—but knowing it intellectually is completely different than seeing it, than feeling it.”

A physicist’s equations inspire and underpin a computer-generated visual—and the subsequent creation is sufficiently breathtaking to awe him in return. The power of science, art, and technology all in one roiling, relativistic bundle of sci-fi goodness.

http://player.cnevids.com/embed/5446f00e61646d41b4130000/5176e89e68f9daff42000013

From hard sci-fi to something a little more human.

Into Dusk is a short film by Jason Ho. This one is cool, if only by demonstrating how much world building you can do with a blue screen and a few details. The action plays out in front of a window—needle-like skyscrapers pierce the skyline in the distance. Personal flying machines form midair freeways. A rime of ice frosts the window’s edges.

How high are we?

Inside the room, a collection of technology suggests a level beyond today’s capability. A woman is hooked up to some kind of life support machine, cables entering her chest Matrix-like. But this isn’t a hospital. Is it her home? A paid auto-care facility? Her partner jams a card into the machine to refill his (and her) credit, like a carnival ride.

The details themselves—medical technology that’s more invasive, not less, and flying cars—evoke a future that isn’t suggested by today’s technological trends. But the film also poses a different question: Will the things that make us human change drastically in the future, or will they essentially stay the same, as they have throughout history?

Into Dusk clearly chooses the latter by liberally employing close-ups of hands and eyes, visceral sounds of water and wet cloth on skin. A smoking man taking care of his ailing lover. Stroking her forehead, he throws a gun in a bag and exits the room to…do what?

Probably something desperate.

Image Credit: Jason Ho/Into Dusk/Vimeo

Transparent Graphene-Based Implants to Grant Clearer View of the Brain


A new implantable brain chip developed at the University of Wisconsin-Madison may help advance our understanding of the human brain. The chip is flexible, transparent, biocompatible—and uses a graphene sensor array just four atoms thick.

To understand a system, we have to observe it, and so far, observing the living brain has proven challenging. Current observation methods of structure and activity tend to interfere with each other, so we can choose structure or activity, but generally not both.

“Historically, we’ve kind of looked at one or the other: we either take high-resolution imaging to look at how the brain is structured, or we poke and prod it with electrodes to try and measure its activity,” Justin Williams, a UW-Madison professor of biomedical engineering and an author on a paper detailing the implant, told Motherboard.

Ideally, of course, researchers would like to look at both structure and activity at the same time. The problem? The implants used to directly measure signals in the brain tend to block the view of imaging techniques intent on recording brain structure.

Graphene, the much-lauded (but still experimental) supermaterial, may offer a solution.

In its purest form, graphene is a one-atom-thick lattice of biocompatible carbon. The device’s sensor, composed of just four sheets of graphene, is extraordinarily thin—this accounts for its flexibility and transparency. Further, because graphene is highly conductive, it can be used as a sensor of electrical activity in the brain.

The result is a sensor that is thin (and robust), transparent, flexible, and conductive—it can measure brain activity without interfering with other instruments.

The graphene sensor array is placed on a flexible plastic backing that conforms to tissue.


“Our devices are transparent across a large spectrum—all the way from ultraviolet to deep infrared,” says Jack Ma, professor of electrical and computer engineering at UW-Madison. “We’ve even implanted them, and you cannot find them in an MR scan.”

Beyond traditional methods, graphene implants could also work with a brain research technique called optogenetics in which scientists genetically manipulate neurons to respond to light. And they might improve neuromodulation therapies where patients with brain disorders use electrical stimulation from implants to control symptoms.

“Despite remarkable improvements seen in neuromodulation clinical trials for such diseases, our understanding of how these therapies work—and therefore our ability to improve existing or identify new therapies—is rudimentary,” says Kip Ludwig, a program director for the NIH’s neural engineering research efforts.

Though the DARPA-funded project was primarily concerned with research, the team has begun exploring medical device applications too. In one instance, they collaborated with the University of Illinois at Chicago to build a contact lens with dozens of sensors to detect retinal injury and perhaps, in the future, aid in early diagnosis of glaucoma.

For graphene devices to become widely used, researchers still need to learn how to make the material more cheaply; currently, there is no industrial-scale method for its production. But as production techniques improve and the material becomes more affordable and widely available, graphene-based biosensors might prove useful throughout the body.

Image Credit: University of Wisconsin-Madison

Desktop Machine Carves Metal and Wood Like Butter


How many desktop 3D printers have we seen on Kickstarter in recent years? Too many to count. But 3D printing is only half of the digital manufacturing promise. Where 3D printing is additive, CNC machines, guided by digital designs, subtract material.

Give a CNC machine a digital file, and it’ll painstakingly sculpt your design from a solid block of material like some kind of robotic Leonardo da Vinci.

But most CNC machines are big and expensive. They aren’t typically available to your average maker or tinkerer. Or if they are, they’re kits requiring assembly. Now, however, a new Kickstarter campaign is aiming to remedy the situation by offering an affordable, pre-assembled desktop carving machine called Carvey.

https://www.kickstarter.com/projects/carvey/carvey-the-3d-carving-machine-for-the-maker-in-all/widget/video.html

Carvey is an enclosed desktop CNC router. It accommodates a range of milling bits, has a build area of a foot by eight inches, carves up to a depth of 2.75 inches, and works with dozens of materials including woods, soft metals, plastics, waxes, and foams.

The machine uses its own proprietary web app, Easel, or the CAD, CAM, and machine control software of your choice. In Easel, users draw a 2D design, the software converts it to 3D, and after selecting a material, the machine carves away.
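For a rough sense of what that 2D-to-3D conversion involves, here is a simplified sketch in Python. It is purely illustrative (not Easel’s actual code), and the rectangular pocket, step-down, and feed rate are made-up example values; the point is only that a flat outline becomes a stack of layered, G-code-style cutting passes.

```python
# Illustrative sketch only: convert a flat rectangular pocket into layered,
# G-code-style cutting passes. The shape, depths, and feed rate are
# arbitrary example values, not anything taken from Easel or Carvey.

def rectangle_pocket_passes(width, height, depth, step_down, feed=1000):
    """Return G-code-like moves that trace a rectangular outline one
    layer (step_down) at a time until the target depth is reached."""
    moves = []
    z = 0.0
    while z > -depth:
        z = max(z - step_down, -depth)        # next layer, never overshooting
        moves.append(f"G1 Z{z:.2f} F{feed}")  # plunge to this layer
        # trace the rectangle's outline at this depth
        for x, y in [(0, 0), (width, 0), (width, height), (0, height), (0, 0)]:
            moves.append(f"G1 X{x:.2f} Y{y:.2f} F{feed}")
    return moves

# Example: a 60 x 40 mm pocket cut 6 mm deep in 2 mm layers.
for line in rectangle_pocket_passes(60, 40, 6, 2):
    print(line)
```

Real CAM software adds tool-diameter compensation, clearing passes, and retract moves, but layered passes like these are the basic idea.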

What might one make with Carvey?

The campaign shows silver jewelry, acetate and wood sunglasses, a fiberboard speaker box, a walnut and silver metallic acrylic address sign, and an acrylic and birch circuit board and electronics enclosure. (And why couldn’t you download a file for a simple tool, say a wrench, or a replacement part, and fabricate it at home?)

The campaign has raised almost five times its $50,000 goal with nearly a month to go.

The team says they’ve been developing Carvey for over a year and a half. They have a working prototype, and the Kickstarter will fund a manufacturing run. Early backers can get a machine for $1,999; later backers will pay $2,399. They’re aiming to fulfill orders by this time next year—but it’s a complex project, so, grain of salt.

Also, although Carvey’s software seems much more user-friendly than standard 3D modeling software, we wonder if it’s still less for the average weekend crafter and more for makers with experience in design programs and troubleshooting in the workshop.

And Carvey isn’t set up to do full 3D—so, don’t expect to sculpt super sci-fi motorcycle helmets from a hunk of metal. Five-axis routers, like the one from Daishin in the video below, still cost into the hundreds of thousands of dollars.

All that said? This is a pretty cool idea.

Desktop 3D printers are stuck on one material. They’re relatively slow. And the plastic they use is expensive. Carvey, on the other hand, is multi-material, looks to be pretty fast, and uses conventional materials at, presumably, conventional prices.

Sure, consumer 3D printing is still evolving, there are items you can’t make any other way, and Carvey may well have shortcomings that aren’t readily apparent. But it’s awesome that, for the cost of an (expensive) laptop, you could plug that same computer into a machine that precision-carves a solid block of metal in your den or garage.

Image Credit: Carvey/Kickstarter

When the Internet Sleeps


The internet is a little bit like an organism—a really huge organism, made up of over four billion IP addresses networked across the globe. How does the internet behave day to day? What are its natural cycles?

John Heidemann, a project leader and assistant professor of computer science at the USC Viterbi School of Engineering, decided to find out.

In collaboration with Lin Quan and Yuri Pradkin, Heidemann pinged 3.7 million IP address blocks—representing almost a billion unique IP addresses—every 11 minutes for two months earlier this year. They asked the simple question: When are these addresses active and when are they sleeping?

The team found some interesting trends. IP addresses using home WiFi routers in the US and Western Europe were consistently on (or awake) around the clock, whereas addresses in Eastern Europe, South America, and Asia tended to cycle more regularly with day and night.
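To make that concrete, here is a minimal sketch in Python of how day/night cycling might be pulled out of probe data like this. It is purely illustrative rather than the USC team’s actual pipeline; the block names, log format, and 0.5 swing threshold are assumptions for the example.

```python
from collections import defaultdict

# Hypothetical probe log: one (block_id, local_hour, responded) entry per
# 11-minute probe of an address block. Block names and values are made up.
probes = [
    ("block-a", 3, False),
    ("block-a", 14, True),
    ("block-b", 3, True),
    ("block-b", 14, True),
    # ...millions more entries in a real survey
]

def availability_by_hour(probes):
    """For each block, return the fraction of probes answered per local hour."""
    hits = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(lambda: defaultdict(int))
    for block, hour, responded in probes:
        totals[block][hour] += 1
        if responded:
            hits[block][hour] += 1
    return {
        block: {h: hits[block][h] / totals[block][h] for h in hours}
        for block, hours in totals.items()
    }

def looks_diurnal(hourly, swing=0.5):
    """Crude test: availability differs by more than `swing` between the
    block's most- and least-responsive hours of the day."""
    values = list(hourly.values())
    return max(values) - min(values) > swing

for block, hourly in availability_by_hour(probes).items():
    label = "cycles with day and night" if looks_diurnal(hourly) else "always on"
    print(block, "->", label)
```

Blocks whose availability barely moves across the day read as “always on”; blocks that drop off at night read as sleeping rather than broken.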

Why is this important? Think of it as a method for differentiating between a “sleeping” internet and a “broken” internet.

“This data helps us establish a baseline for the internet,” says Heidemann, “to understand how it functions, so that we have a better idea of how resilient it is as a whole, and can spot problems quicker.”

The simplest use of the data may be akin to a health checkup, but there might be other interesting research outcomes too. For example, an “always on” internet may correlate with economic development. Over the years, we might be able to track how countries are doing, adding internet data to other broad statistics like GDP.

You might also have noticed there are big holes in the map in Africa, Asia, and South America. These in part correlate to low-population areas—but they also show where internet coverage is still spotty. Indeed, billions around the world still lack regular internet access (a situation Google and Facebook are intent on remedying).

Heidemann’s map is intriguing, in part, because it’s a striking visual representation of just how connected the planet already is—and just how much more connected it is likely to become over the next few years and decades.

Image Credit: USC Viterbi School of Engineering

Womb Transplant Leads to Successful Birth for Swedish Mother


In 1978, Louise Joy Brown made human history as the first “test tube baby” ever born after her mother underwent a revolutionary fertility treatment known as in-vitro fertilization (IVF). In the decades since, several million children have been conceived using IVF, which has become a routine treatment for some common forms of infertility.

Recently, doctors at Sweden’s University of Gothenburg pushed the envelope a little further when they announced the results from over a decade of research into absolute uterine factor infertility. This type of infertility, which previously had no treatment, often results when a woman is born without a uterus or loses it to a disease like cancer.

In a paper published in the medical journal The Lancet, the research team, led by Dr. Mats Brännström, describes the delivery of a moderately preterm but otherwise healthy baby boy to a 35-year-old Swedish woman with a transplanted womb. The woman suffered from congenital Rokitansky syndrome, meaning she was born without a uterus.

The uterus was donated by a 61-year-old woman and transplanted in a 10-hour surgery last year. Doctors monitored the recipient closely for a year and found she returned to a regular menstruation schedule in less than two months. Using IVF, the doctors produced a viable embryo and introduced it into the woman’s new uterus.

Though this procedure is considered a remarkable success, it wasn’t without complications. The woman had three mild cases of rejection, a common occurrence for transplant recipients. Although one of the cases of rejection appeared during the pregnancy, immunosuppressive drugs were used successfully to treat it.

Further, in the 32nd week of the pregnancy, the woman began suffering from pre-eclampsia, a disorder characterized by kidney dysfunction and high blood pressure. As it was, the mother was already missing one kidney.

Realizing the baby could also be harmed by this condition, the doctors followed clinical protocol and delivered him earlier than planned via Caesarean section. The baby boy, who had a birth weight of 3 lbs, 14.6 oz, was otherwise healthy, as was the mother.

Although it is not clear what caused the woman’s pre-eclampsia, women who have become pregnant via IVF are at increased risk for the condition. The doctors also speculated the age of the donor might have been a factor, though it is not conclusive.

Although uterus transplants had been tried before, this was the first time one resulted in a live birth. Previous attempts resulting in pregnancy ended in miscarriages. There are several other women in this current study who have also received a uterus transplant and are undergoing IVF in the hopes of getting pregnant.

The success of this transplant brings hope to women who are infertile due to the loss of a functional uterus and wish to carry their own babies. Though far from perfect, the study proves that a transplanted uterus can successfully carry a baby to term.

However, the doctors note wide availability of the procedure is likely still years away and partly dependent on the results of their study. Further, because it requires major surgery for both the donor and mother, ethical questions remain, and paid surrogacy (illegal in Sweden) remains a viable alternative in other parts of the world.

That said, the capability is still in its infancy. IVF became routine within a few decades; it’ll be interesting to see where this new procedure takes us.

Image Credit: Shutterstock.com

First Patient Receives Stem Cell Treatment for Degenerative Eye Disease


Since stem cells were first hailed as a potential cure for a variety of diseases, we have witnessed setbacks, controversies, and failures. Now, however, a human trial using stem cells to treat degenerative eye disease has received the green light from an oversight committee in Japan, which agreed the proposed treatment is safe.

The trial is especially significant because instead of employing controversial embryonic stem cells, researchers will instead use induced pluripotent stem (iPS) cells—that is, stem cells derived from the patient’s own body (in this case, the skin).

With the approval, a team of doctors led by Dr. Yasuo Kurimoto transplanted retinal tissue grown from a patient’s own iPS cells in the hopes of treating her eye disease. The elderly woman suffers from age-related macular degeneration, one of the most common causes of visual impairment in people over 50.

The disease results when the retina detaches from the underlying tissue because of either the accumulation of cellular debris or the over-growth of blood vessels.


Sheet of Retinal Pigment Epithelium (RPE) cells derived from human iPS cells.

To generate the transplanted tissue, Dr. Masayo Takahashi, from the Laboratory for Retinal Regeneration at the RIKEN Center for Developmental Biology in Kobe, Japan, took skin cells from the patient and coaxed them into pluripotent cells. These cells were then manipulated to become retinal tissue, which was grown in thin sheets and used to replace the patient’s damaged tissue.

Though this treatment may not fully restore the patient’s vision, it is expected to at least prevent her from going fully blind. It also serves as an important proof-of-principle that iPS cells are safe and effective for use in treating degenerative eye disease.

Before human trials, the researchers tested the safety of iPS cells in mice and monkeys. They found tissue made from iPS cells did not cause immune reactions because it came from the same animal it was transplanted into. Further, they didn’t find evidence the cells grew into cancerous tissue, a serious concern regarding the use of stem cells.

Like embryonic stem cells, iPS cells have the potential to become any type of cell in the body. However, clinical use of iPS cells gets around the ethical and religious objections to the production and subsequent destruction of human embryos to harvest stem cells.

RPE cells derived from human iPS cells under the microscope.

When President Obama loosened federal restrictions on the use of embryos for research in 2009, he only removed the ban on embryos left over from in vitro fertilization (IVF) treatments. The Bush-era ban on the creation of embryos strictly for research is still in effect.

This is why research into the feasibility of iPS cells for therapeutic uses has received much support in the past few years. Indeed, the 2012 Nobel Prize in Physiology or Medicine was awarded to Dr. Shinya Yamanaka and Dr. John B. Gurdon for their discovery that differentiated cells could be induced to become pluripotent cells.

Now, two years later, we’re witnessing one of the first clinical applications of iPS cells.

Of course, only time will tell whether these cells live up to their potential to yield revolutionary treatments for a host of human diseases. For now, it will be important to continue funding these types of studies whenever they show promise in the clinic.

Image Credit: RIKEN

With Mindware Upgrades and Cognitive Prosthetics, Humans Are Already Technological Animals


In recent years, the surprising idea that we’ll one day merge with our technology has warily made its way into the mainstream. Often it’s couched in a combination of snark and fear. Why in the world would we want to do that? It’s so inhuman.

That the idea is distasteful isn’t shocking. The imagination rapidly conjures images of Star Trek’s Borg, a nightmarish future when humans and machines melt into a monstrosity of flesh and wires, forever and irrevocably leaving “nature” behind.

But let’s not fool ourselves with such dark fantasies. Humans are already technological animals; tight integration with our inventions is in our nature; and further increasing that integration won’t take place in some distant future—it’s happening now.

To observe our technological attachment, we need simply walk out the door. It’s everywhere, all around us—on the bus or train, at work, at home, in the bathroom, in bed—people gazing into screens, living digital lives right next to their ordinary ones.

In the Matrix, the experience is involuntary, a tool of control and oppression. In our world, it’s voluntary, and mostly about freedom, expansion, and expression. As Jason Silva recently noted, our devices augment our brains, like cognitive prosthetics.

In his latest video, Silva says we should go easy on those fervent fans lining up for the latest smartphone: “These are not trivial things, these are not fashion accessories—these are mindware upgrades.” The newest smart devices speed information processing, better organize our thoughts, and more efficiently connect us with others.

Silva says a simple telephone collapses time and geography in a kind of “technologically mediated telepathy” as termed by David Porush. Smartphones and other connected devices do the same thing, of course, and at very nearly the speed of light. But the word smartphone fails to convey that the phone part is far less than half the equation.

Referring to Andy Clark’s book Natural-Born Cyborgs, Silva says, “The modern mind emerges in the feedback loops between brains and these tools that we create and the environment in which we create them. We’re thinking through our iPhones and Samsung phones. We’re thinking on the internet. We’re thinking on the page.”

This isn’t a physical merger with technology, but it is surely a psychological one.

And this deepening union of brains and devices—Silva’s feedback loops and mindware upgrades—is just the latest round. Man has been “merging” with technology since the beginning. It’s more or less our modus operandi. We exude technology. We live in it. It lives in us.

So, why is the concept so foreign?

When technology is accepted and absorbed into the culture, we no longer think of it as technology. Consider the café I’m sitting in as I write. What do I see?

Plates, cups, utensils, and backpacks are extensions of our arms, hands, and fingers. Chairs and bicycles extend and augment our legs and backs. Clothes are prosthetic fur—in the name of modesty, but also for warmth, camouflage, or sexual signaling. Glasses are eye prosthetics. Newspapers, books, and notepads are cognitive prosthetics.

All this without noting the most obvious items—laptops, smartphones, and tablets. Why are these latter so much more obviously technological? Because the older the technology, the more completely we’ve incorporated it, and the harder it is to see.

This techno-blindness is apparent in the way we approve of some technologies and not others. Worrying about social networks and internet addiction, we ask, “Why can’t people just pick up the telephone or read a book?” Maybe one day we’ll lament how much time people spend in virtual reality—television was so much healthier.

Indeed, already computing technology is rapidly being assimilated. I’m hemmed in by people enthralled by their devices, absorbed in another world for the moment. And at this point, it’s perfectly acceptable to balance a sense of awe with a note of skepticism.

What’s the downside? Might we not lose ourselves in our own creations, misplace our moral compass and wander the trackless paths of digital addiction? No doubt.

But this isn’t a good reason to end technological experimentation.

We’re free to make our own decisions about how to use technology. If you find your smartphone annoyingly ever-present—ban it in the bedroom. Leave it in your pocket at dinner. Go surfing, climbing, or camping. Use it wisely, don’t let it rule you.

Perhaps even greater technological integration—contact lens displays, embedded medical devices, brain-computer-interfaces—will rob us of the voluntary “off switch.” But I don’t think so. If a new technology is more trouble than it’s worth, few will adopt it.

In the meantime, choose wonder over fear and take a moment to marvel at the times we live in. As Louis CK says, “Even the shittiest cellphone in the world is a miracle.”

Image Credit: Shots of Awe

What We’re Reading Across the Web This Week (Through Oct 18, 2014)


While we kept our eye on the next iteration of devices rolling out of Apple this week, the Hub team was also getting a steady dose of great articles challenging some of the very pillars of how we see technology shaping the future. It’s at times a sobering look at the challenges and consequences of modern progress, and a reminder of just how much the landscape of emerging technologies continues to evolve.

Enjoy this week’s batch of recommended stories!

ROBOTICS: Robot gets a driving lesson for DARPA challenge
Sharon Gaudin | Computerworld
“‘With speed comes a lot of uncertainty and instability. As roboticists, we like everything slow because we can control slow. As you get more and more into the dynamic range, you have to make sure all your algorithms get updated so you can handle the higher speeds.’”

COMPUTING: The Imminent Decentralized Computing Revolution
Gary Sharma | The Wall Street Journal
“The transition to a global system that is decentralized, distributed, anonymous, efficient, secure, permission-less, trustless, resilient, frictionless, almost free, with no single point of control and no single point of failure… seems inevitable.”

WEARABLES: The Next Hot Market For Wearable Tech: Grandma
Satta Sarmah | Fast Company
“As baby boomers age and seniors live longer, health industry folks have started talking about the ‘longevity economy’–an estimated $20 billion market opportunity for businesses to develop products that will provide health care services to older adults or help them live independently. ‘The new expectations of old age are what’s going to drive innovation in business, technology, and society,’ says Joe Coughlin, the director of MIT’s AgeLab.”

TECHNOLOGY: The Anti-Tech Tech Movement
Alexandra Ossola | Motherboard
“When you get an email, text or notification, your brain produces dopamine. This makes you feel good. But it also elicits what neuroscientists are calling a seeking behavior: you want more of it.”

CULTURE: Should Generation TED take a more sceptical view?
Julian Baggini | AEON Magazine
“There is a second, related contradiction, between the celebration of maverick freethinking and the web’s powerful domination by a handful of controlling big players….Google clearly struggles with this tension. It strives for an open web and still stands by its motto, ‘Don’t be evil’…But Google has now become a corporate behemoth with enormous power. Many campers expressed sotto voce a feeling that, in accepting Google’s hospitality, they had in some way supped with the devil.”

FOOD: Agriculture Is Becoming a ‘Model Citizen’
Gregory Goth | ACM
“‘The idea is to help progressive farmers, not lazy farmers,’ Rodriguez says. ‘It’s our work to set the right expectations, because when you have a robot, the quantity of data you receive is a lot more than you had before, and that information has to be leveraged. Otherwise, you’re not taking advantage of the reason to have a system like that.’”

[Image credit: robot repair courtesy of Shutterstock]

Last Chance to Contribute to Singularity University Documentary Crowdfunding Campaign


Time is running out to help make the upcoming Singularity University documentary, The University, as good as it can be. Last month, director Matt Rutherford announced a $30,000 Indiegogo campaign to fund the film’s finishing touches.

The campaign proved a bigger hit than expected, and thanks to a rush of generous contributions, it rapidly met its original goal and a later stretch goal of $50,000.

At the current level of funding, the film’s creators say the documentary will be polished and ready to rock. But in the campaign’s final days, they’re hoping to hit one more stretch goal of $90,000 to help with marketing and distribution.

In addition to a few new perks, if the campaign hits that last goal they’ll shoot an extra video featuring Peter Diamandis on a topic chosen by the film’s supporters.

The University charts the birth of Singularity University, documents the first programs, and follows the evolution of a group of early Singularity University startups.

It also includes interviews with Sir Martin Rees, Stephen Hawking, Michio Kaku, Chris Anderson, Craig Venter, Buzz Aldrin, and other famed thinkers and visionaries.

What is the Singularity University challenge? Come up with an idea for a business that can positively impact a billion people in the next decade. Yes, that’s as intimidating as it sounds—but it has resulted in startups in areas as diverse as biotech, drones, and space.

The University team explains, “It’s a brilliant challenge, pushing the students to create something bolder than just the next social networking app.”

They hope their film spreads the Singularity University spirit and message to a broader audience and inspires those who haven’t attended a program to become entrepreneurs, create companies, and positively impact millions, even billions, of people.

So, if you haven’t yet, check out The University Indiegogo page, and help them get the word out that, “The best way to teach the future is to invent it.”

Image Credit: The University Feature Documentary