Month: May 2014

Suspended Animation Goes Primetime: Say Goodbye To Death As We Know It

Singularity Hub

Death has always been something of a moving target. Take, for example, the first edition of the Encyclopedia Britannica, published in 1768, which defined the term as “the separation of soul and body; in which sense it stands opposed to life, which consists in the union thereof.”

But how can you tell when said separation occurs? Well, that’s a slightly more complicated question and one we still haven’t quite cracked. Thus, moving forward, and trying for a, um, more practical definition, we began to define the end of life by a series of cessations.

In the beginning, breath was life. Of course, this idea led to the obvious reversal: the cessation of breath meant the cessation of life. But that didn’t last for long.

As our knowledge of biology improved, death became definable by the cessation of heart function. In other words, if you were out of pulse, you were out of time.

But advances in neuroscience, ideas about brain death, and the introduction of mechanical ventilators—with their ability to keep the heart pumping long after the brain had died—forced a society-wide reevaluation of terms.

At issue were irreversible comas and the tricky legalities of organ harvesting (i.e., when is someone dead enough that we can borrow their kidneys to give to someone more alive and in need, but not so dead that those kidneys stop working?). And it was just this issue that brought together an ad hoc committee at Harvard Medical School in 1968. In an effort to come up with a hard and practical definition of “irreversible coma,” the committee also established “brain death” as the best proof of life’s end.

The Harvard criteria spread from there, effectively becoming the accepted definition the world over. But once again, not so fast.

Back in 2002, researchers at the University of Michigan Hospital in Ann Arbor announced that they had—using pigs—found a way to remove all of the animal’s blood and replace it with cold saline solution, which induces rapid hypothermia and halts almost all cellular activity—essentially placing the pig into suspended animation.

Once again, our technology messed with our terminology. “After we did those experiments,” Peter Rhee, one of the main researchers involved, recently told New Scientist, “the definition of ‘dead’ changed. Every day at work I declare people dead. They have no signs of life, no heartbeat, no brain activity. I sign a piece of paper knowing in my heart that they are not actually dead. I could, right then and there, suspend them. But I have to put them in a body bag. It’s frustrating to know there is a solution.”

And that solution is finally being tested out in humans. As of March 29, 2014, a team of surgeons trained in this saline-cooling procedure is on emergency call at the UPMC Presbyterian Hospital in Pittsburgh, Pennsylvania. In this field trial of the technique, patients who arrive at the hospital after having suffered cardiac arrest from traumatic injury (i.e., gunshots) and who do not respond to attempts to restart their hearts will be cooled with saline to about 10 degrees Celsius (50 Fahrenheit). Their cellular activity will stop. They will be “clinically dead.” But—if doctors can repair the trauma in roughly two hours—they are still capable of being revived.

In itself, this is amazing. This is two hours of suspended animation—which has been the stuff of sci-fi for almost a century. Today it’s scientific fact.

But where things get really interesting is what happens tomorrow. As the technology progresses, it is not too much of a stretch to say those two hours of suspended animation will give way to four hours and eight hours and sooner or later whole days and weeks and months—in other words, we’ll have mastered artificial hibernation.

And there are plenty of good reasons to master this technique, with deep space exploration being at the top of most people’s lists. But what happens, say, when a spaceship on its way to the planet formerly known as Pluto, complete with a crew in hibernation, gets dinged by an asteroid, knocked off course, and lost before it can land and the crew can be reanimated? The crew spends years and years and years in artificial hibernation. So are they alive or are they dead?

Put differently, if we know this crew can later be revived, but centuries might pass before we can actually catch the ship and revive the crew, is it ethical for us to shoot up a death ray laser beam from Earth to destroy the ship and put the crew out of their suspended misery?

Since that 1768 Encyclopedia Britannica definition, the entry for death has been rewritten over 30 times. You would assume that today, in a society that can measure effects down to the quantum level, death’s definition would be fixed. But with suspended animation suddenly heading into human trials, when it comes to defining death, we’re still nowhere close to a straight answer.

[images: cryogenics courtesy of Shutterstock; induced hibernation/wikipedia]

3D Printers and Drones — The Best Mash-up Since Peanut Butter and Chocolate?


U.S. FAA head Michael Huerta recently told NPR that businesspeople are begging to be allowed to use drones for commercial purposes. It seems every Tom, Dick and Harry can envision a great way to make use of the unmanned aerial vehicles, which cost just a few dollars an hour to operate.

The only tech product with a longer list of clever uses is the 3D printer — though its price point largely limits its uses to industry and high-tech.

And because one of the formulas for innovation is combining two useful technologies to see if the whole might be greater than the sum of its parts, it was only a matter of time before someone mounted a 3D printer to a drone.

Mirko Kovac, an aerospace engineer at the Imperial College London, has done just that.

Mounted on a quad-copter drone, a 3D printer is loaded with quick-drying polymer foam. The idea is that the drone will be able to perform basic repairs on hard-to-reach buildings or apply the sticky foam to help a second drone lift and carry off an item that may be too dangerous for humans to handle.

Up to now, drones have been tasked with taking cargo to a location. Amazon hopes to use drones to deliver underwear and books to its shoppers.

Believe it or not, the inspiration for mounting a 3D printer on a drone was mined from nature. Biomimicry has driven a lot of innovation in robotics, which is Kovac’s discipline. Adding a 3D printer to a drone came about as an unlikely way to emulate the swiftlet, a bird that builds its nest from its own saliva.

“Swiftlets are like beautiful flying factories that can navigate often treacherous, dark environments to find a suitable place to build nests. Amazingly, they carry inside of them all the materials they need to build their own home. We have taken these traits and adapted them in robotics. Robots that mimic these birds could have enormous benefits, helping humans in construction and in hazardous situations,” Kovac said.

Matternet, like Kovac, has envisioned using drones to help in disasters, though their plans have thus far remained limited to delivering useful items, such as medicine and food rations. (For 3D printers, disaster relief is already old hat.)

With the foam-applying printer on board, Kovac says drones could also remove bombs or dispose of hazardous materials — say, radioactive waste from a crippled reactor. They could also inspect and repair hard-to-reach industrial facilities such as offshore wind farms.

Kovac’s drone, shown off at Imperial’s annual festival, is still very much a prototype. Currently, it can carry just 2.5 kilograms, though the next version could manage 40, according to Kovac. Nor does the drone have the on-board sensors that would allow it to fly safely outside laboratory conditions.

“For full autonomy, it would require onboard sensing and control plus algorithms to distinguish where to print, while also stabilizing its flight based on the scanned environment. While this is technically possible it’s not very easy and is one of the next steps in our development,” Kovac told Singularity Hub.

To make most use of the printer, future versions will also need a 3D scanner. And to reach the remote locations where it could be needed, the drone will require an expanded range. Kovac hopes to replace the battery with a solar fuel cell that would charge while the drone sits atop a tree, like a 21st-century bird.

Photos: New Scientist, University of Granada

Daily At-Home Lab Kits Now Available, But Are the Results Meaningful?


A number of trends are converging to encourage consumers to manage more of their health themselves and leave less of it up to a doctor. The cultural zeitgeist loves data, and mobile devices make it possible for us to track our personal data. Meanwhile, the cost of formal health care continues to rise, spurring people to find ways to avoid the doctor.

San Diego-based Cue is bringing DIY health care to new heights, with an at-home lab kit that runs five standard tests and displays the results in a mobile app. The at-home lab kit began pre-selling earlier this month and will ship about a year from now. The car stereo-sized reader and iPod-sized test cartridges cost $199 for early buyers and will retail for $300.

The user takes a sample with a nasal or mouth swab or a needle, places the “wand” into a cartridge — where it will be permanently sealed for safe disposal — and then inserts the cartridge into the reader. The lab tests inflammation via C-reactive protein, vitamin D levels, female fertility based on luteinizing hormone, influenza virus, and testosterone levels.

The app, which syncs via Bluetooth, tracks levels over time and makes connections between the results and the user’s activity and diet if he or she has recorded them. The app also includes social networking features, so if a member in a user’s narrow social circle tests positive for influenza, the user gets notified.

Miniaturized lab kits work by using microfluidic channels in sample cartridges to sort out and count particular molecules electronically. Microfluidic approaches mean there’s less blood required, and the lab machines can become much smaller and more automated. But Cue is, as far as we know, the first to put these tiny, tidy lab devices directly in the hands of consumers.

But how many people need near-daily lab tests? Are these the right tests? And do they measure overall health in the way Cue describes?

“What if you could hold the power of your health in your hands? You could recover from injuries faster, elevate your mood, and keep your immune system and bones strong by balancing your body,” the company promises in a promotional video.

Cue argues that the tests available at launch are among those most commonly ordered by doctors, making its product as useful as possible to as many consumers as possible. But the value of these lab tests performed on healthy people on a daily basis without medical supervision seems questionable.

For instance, C-reactive protein, while linked to inflammation, is used medically to assess the patient’s risk of heart disease, not overall wellness or recovery from workouts. And while vitamin D deficiency is a genuine medical problem, few doctors would recommend that patients get more sun to guard against it, when sun brings health risks of its own. (Sunscreen protects the skin, but can also prevent the body from boosting vitamin D production.)

“Discover how exercise like sprinting and strength training can boost your natural testosterone levels, and receive recommendations to optimize your diet to amplify your performance both inside and outside the gym,” Cue promises. Dips in testosterone levels, while promoted by Big Pharma as an illness in need of treatment, are rarely a health concern on their own, and testosterone is not used medically as a proxy for physical fitness.

Using complex medical information in a consumer device is what got 23andMe in hot water with the FDA, eventually forcing the company to stop making health claims for its product. We suspect something similar will happen with Cue.

It sounds like Cue does, too. The device is available now under an investigational device exemption: Those who pre-order the device will be invited to participate in a study that the company hopes will be “an important part of Cue’s path to FDA clearance.”

Even if it doesn’t prove its mettle scientifically, quantified self advocates — and potentially those with autoimmune diseases in which inflammation plays a major role — may choose to use the at-home lab anyway.

Photos: Cue

Singularity Surplus: Robotic Furniture and Stuffed Animals


Advances in exponential technology happen fast — too fast for Singularity Hub to cover them all. This weekly bulletin points to significant developments to keep readers in the know.

R is for robot
One of the major problems we face globally, but especially in the United States, is inequality in education. Children whose parents talk and read to them more get an early advantage. Technology advocates have tried to supply new tools to expand educational access, but their success has been hindered by the fact that learning is, for social animals like humans, a social activity. Some researchers at MIT are trying out social robots that talk and read to kindergarteners. The bots look like stuffed animals, make eye contact, and adapt their level to the tykes’ performance.

Could 3D printing provide artificial kidneys?
For their senior project, Anson Ma’s engineering students at the University of Connecticut were asked to build an artificial kidney. Obviously, the kids didn’t have access to stem cell technology, so they went old school — sort of. They used a 3D printer to create a hollow shell with a biocompatible surface. For the inner parts of the kidney, where the filtering work takes place, the students found that the membrane they needed would push beyond the resolution that most printers can deliver and instead obtained a hollow fiber membrane. Could the kidney work? It’s possible: Artificial hearts came around long before stem cells and 3D printers did, and no one has yet managed to produce a functional kidney from stem cells.


Open-source 3D printing
Autodesk, a maker of 3D modeling software, is making a play to expand 3D printing. It has announced open-source 3D printer software and a reference printer. What Android did for smartphones, the Spark printer software will do for 3D printing, Autodesk hopes. While there are already open-source printers, such as RepRap, on the market, none has the muscle of Autodesk behind it. Autodesk aspires to push down the price of printers so more people use them and consequently need its modeling software. There are no plans to make that software open source.


Robots bid farväl to assembling Ikea furniture
Furniture, particularly for small spaces or modular offices, can be a bear to assemble and move. Why not let robots do it—and to save space, make the robots double as the furniture? Scientists at the EPFL have designed robotic bricks that connect to one another and self-assemble. Users create the layout in a tablet app.

Ghost in the machine
Singularity Hub has covered several efforts to replicate the biology of the human brain with silicon chips to simultaneously improve computer technology and jumpstart research into the brain. But Randal Koene wants to upload our individual brains to silicon, measuring each activity and translating it into actionable computer code. This would make it possible for any one of us to live forever as a computer program, and, according to Koene, more conventional approaches to brain modeling make it almost possible to achieve his vision now. If you pinch the computer simulation, does it not say “ow”?

Photos: Brian McEntire / Shutterstock.com, Al Ferreira for UConn

Biotech’s Brave New World: Push One To Create Life; Push Two To Create Alien Life



It’s been a good month for miracles. And by miracles I mean our oldest miracle, that first miracle, the creation of life itself.

During these first weeks in May, two separate teams working at two separate institutions announced that when it comes to creating life from scratch, well, there are a couple of new gods in town.

Of course, if we’re talking about creating life from scratch, we must first mention the old gods, which is to say, this is when biologist Craig Venter comes into the discussion.

A few decades back, while he was working to read the human genome (i.e., the Human Genome Project), Venter also began wondering what it would take to write one. He wanted to know: “What does the minimal genome required for life look like?”

Back then, DNA synthesis technology was too crude and expensive to consider writing a minimum genome for life, but exponential advances in biotechnology obliterated these problems. Consider “synthetic biology,” which moves the work from the molecular to the digital. In syn-bio, genetic code is manipulated using the equivalent of a word processor. With the press of a button, DNA can be cut and pasted, effortlessly imported from one species into another. Single letters can be swapped in and out with precision. And once the code looks right? Just hit send. A dozen different DNA print shops can now turn these bits into biology.
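That cut-and-paste workflow can be sketched in ordinary code: at bottom, syn-bio design tools treat a genome as an editable string of letters. Everything in the sketch below (the sequences, the splice position, the helper names) is invented for illustration and is not from the article:

```python
# Toy sketch of treating DNA as editable text. All sequences, positions,
# and helper names here are invented for illustration.

promoter = "TTGACA" + "TATAAT"          # toy bacterial promoter elements
reporter = "ATGAGTAAAGGAGAAGAACTT"      # opening codons of a toy reporter gene

def splice(genome: str, position: int, fragment: str) -> str:
    """Paste a DNA fragment into a genome at the given position."""
    return genome[:position] + fragment + genome[position:]

def swap_base(genome: str, position: int, base: str) -> str:
    """Swap a single letter in or out with precision."""
    return genome[:position] + base + genome[position + 1:]

host = "GGGCCC" * 4                     # stand-in for a 24-letter host genome
edited = splice(host, 6, promoter + reporter)
edited = swap_base(edited, 0, "A")      # a single-letter edit

print(len(edited))  # prints 57: the 24-letter host plus the 33-letter insert
```

The real design files are, of course, millions of letters long, but the editing operations are conceptually this simple, which is exactly why a "print shop" can take the result as input.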

DNA replication or DNA synthesis is the process of copying a double-stranded DNA molecule. This process is paramount to all life as we know it. (Photo credit: Wikipedia)

In May of 2010, with the help of these new tools, Venter answered his question by creating the world’s first self-replicating, synthetic chromosome.

To pull this off, he used a computer to design a novel bacterial genome (over a million base pairs in total). Once the design was complete, the code was emailed to Blue Heron Biotechnology, a company that specializes in synthesizing DNA from digital blueprints. Blue Heron took Venter’s blueprint and returned a vial filled with freeze-dried strands of the DNA. Just as one might load an operating system into a computer, Venter inserted the synthetic DNA into a host bacterial cell that had been emptied of its own DNA. Soon, the cell “booted up”—that is, it began generating proteins, metabolizing, growing, and, most importantly, dividing. One cell became two, two became four, four became eight. And each new cell carried only Venter’s synthetic instructions. For all practical purposes, it was an altogether new life form, created from scratch. Venter called it “the first self-replicating species we’ve had on the planet whose parent is a computer.”

This in itself was huge news—but it was also yesterday’s news.

This month, Autodesk, the design and engineering software company, booted up a synthetic bacteriophage—aka a virus—then 3D printed the result. What is a software design company doing in the virus business? “Well,” says Andrew Hessel, a distinguished researcher at Autodesk, “we’re considering the possibility that you can write software for living things with bio-code (aka DNA).”

And the craziest part: Venter’s effort took five years of research; Autodesk’s took two weeks and about $1,000.

At roughly the same time this work was going on, over at Scripps Research Institute, in La Jolla, California, scientists succeeded in creating the very first organism with “alien” DNA—what the researchers involved called “a semi-synthetic organism with an expanded genetic alphabet.”

Extreme Tech has a great description of the work:

“In normal DNA, which can be found within the genes of every organism, the twin strands of the double helix are bonded together with four bases, known as T, G, A, and C. In this new organism, the researchers added two new bases, X and Y, creating a new form of DNA that (as far as we know) has never occurred after billions of years of evolution on Earth or elsewhere in the universe. Remarkably, the semi-synthetic alien organism continued to reproduce normally, preserving the new alien DNA during reproduction.”
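The expanded alphabet described above can be pictured as nothing more than a bigger pairing table. As a purely illustrative sketch (note that X and Y are the shorthand names used in coverage of the Scripps work; the actual unnatural bases are synthetic molecules with longer chemical names):

```python
# Toy pairing table for a six-letter genetic alphabet. Just as A pairs
# with T and G pairs with C, the new base X pairs with the new base Y.
# Purely illustrative; not the researchers' actual notation or code.
PAIRING = {"A": "T", "T": "A", "G": "C", "C": "G", "X": "Y", "Y": "X"}

def complement_strand(strand: str) -> str:
    """Return the complementary strand under the expanded pairing rules."""
    return "".join(PAIRING[base] for base in strand)

print(complement_strand("ATGCXY"))  # prints TACGYX
```

The biological surprise in the Scripps result is that the cell's own replication machinery tolerated the extra row in this table, copying X–Y pairs alongside the natural ones.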

So what does all this mean going forward? Hard to say for sure. But one thing is for certain: sooner or later, we’re going to need a new definition of the miraculous.

*For more of Steven’s work, check out his latest book, The Rise of Superman.

[Images: wikipedia; DNA courtesy of Shutterstock]

Exploring the World of Quadruped Robots Post-Acquisition of Boston Dynamics


For years, Boston Dynamics has been the undisputed heavyweight champ of viral robot videos. Four-legged robots, like BigDog, LS3, Cheetah, and Wildcat were their bread and butter while two-legged bots, like Petman and Atlas, supplied wonder and fear in equal measure. But now? Boston Dynamics is no more.

The firm was snapped up in Google’s 2013 robotics buying spree, and while the team is presumably intact and working as hard as ever, exactly what they’re working on is less visible. After a once-consistent stream of videos, the company’s YouTube channel hasn’t been updated in seven months.

Luckily, you can still get your four-legged robot video fix elsewhere, even if none are yet quite as advanced. The Italian Institute of Technology’s HyQ, for example, is one of the most capable quadruped robots out there right now.

HyQ has multiple gaits and can adapt to uneven, unpredictable terrain (the prime benefit of legged robots over wheeled robots). HyQ’s most recent video is reminiscent of early Boston Dynamics videos. The robot traverses obstacles and automatically stabilizes itself after being smacked by a punching bag.

Meanwhile, MIT is working on an heir to Boston Dynamics’ Cheetah. The MIT bot has a smooth gait, can switch mid-stride from trotting to galloping, and is capable of speeds up to 13 mph (about half the speed of the Boston Dynamics bot).

While Boston Dynamics’ Cheetah morphed into Wildcat and was released into the wild (a parking lot) last year, MIT’s cheetah-bot is still firmly supported and tethered as it jogs on the treadmill. That said, its electric motors (instead of hydraulic actuators) approach the efficiency of a real cheetah, and MIT says it can run on batteries alone (as opposed to a combustion engine) for over an hour at a speed of 5 mph.

The Swiss Cheetah-Cub robot developed by EPFL moves unsupported without a treadmill (though it’s still tethered for power) and can switch its gait from walking to a trot. Cheetah-Cub is more diminutive—about the size of a house cat—and at 1.3 mph lacks the speed of the other robots. However, the EPFL team is working on a flexible spine to enable a third gait (gallop), which will hopefully add a little extra speed too.

Osaka University’s quadruped robot Pneupard is slow, jerky, and lacks a brain. But that’s okay. The team behind Pneupard is experimenting with a different form of actuation. Instead of using motors to move the bot’s joints, Pneupard is pneumatic—air is piped in and controlled in the muscles by onboard valves.

One benefit of pneumatic muscles, the team notes, is they deform without breaking on contact with the floor, naturally adapting to changes. Such softness brings their structure closer to that of animal muscles.

I’ll admit, none of these bots get the blood pumping quite like a few of Boston Dynamics’ more notable creations. But none have Boston Dynamics’ level of funding or history (over two decades). While we expect big things from Boston Dynamics and Google (would a few videos hurt the cause?), open projects like these (and no doubt others) are more than enough to keep things fresh and moving in the wider world of robotics.

Image Credit: Italian Institute of Technology/YouTube

Gecko-Inspired Adhesive Sticks 700 Pounds to a Wall



Geckos are the ultimate climbers. Microscopic hairs on their toes enable the lizards to climb just about anything using the molecular attraction (or Van der Waals forces) between their feet and a surface. Although the lizards are firmly stuck to the surface, they can also easily, repeatedly raise and replace their feet to move.

For the past few years, a group at the University of Massachusetts Amherst has been reverse engineering the gecko to make an adhesive called Geckskin. An index-card-sized piece of Geckskin supports up to 700 pounds on smooth surfaces (like glass) and can be easily removed, leaving no residue.
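As a rough back-of-the-envelope check on that figure, assuming “index-card sized” means a standard 3-by-5-inch card (the article doesn’t specify the dimensions), the implied holding pressure works out as follows:

```python
# Back-of-the-envelope pressure implied by the 700-pound figure, assuming
# "index-card sized" means a standard 3 x 5 inch card (our assumption).
card_area_in2 = 3 * 5                    # area in square inches
load_lbf = 700                           # supported load in pounds of force

pressure_psi = load_lbf / card_area_in2  # pounds per square inch
PSI_TO_KPA = 6.894757                    # standard psi-to-kilopascal factor
pressure_kpa = pressure_psi * PSI_TO_KPA

print(round(pressure_psi, 1))  # prints 46.7
print(round(pressure_kpa))     # prints 322
```

Call it roughly 47 psi of sustained adhesion over the whole patch, which gives a sense of why the comparison below is to epoxies rather than to tape.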

Adhesives of similar strength include permanent materials like epoxies. But Geckskin can be applied and reapplied on a variety of surfaces without losing its stickiness.

Given such extraordinary capabilities, you might expect Geckskin to be made of some new miracle material. Not so. Geckskin is made using readily available, cheap materials—nylon, bathroom caulking, and carbon fiber or cotton, for example.

The project is led by Al Crosby, a materials scientist and engineer, and Duncan Irschick, an integrative biologist and innovator whose gecko research helped reveal the amazing adhesive forces at play in gecko toe-pads. PhD candidates Mike Bartlett and Daniel King helped develop Geckskin and coauthored a 2012 paper on the material.

Irschick told the Guardian, “We aren’t creating a new material that requires some crazy nanotechnology and is going to cost millions to produce. The creation is simple in its conception and profound in the way it’s put together.”

Geckskin has two prime components: a soft pad woven into a stiff fabric backing. The stiff backing is designed to mimic a gecko’s tendons, which, along with the lizard’s amazing toe-pads, are crucial to its climbing abilities. Because previous attempts at reverse engineering the gecko focused primarily on those microscopic hairs—instead of taking into account the whole package—they proved hard to scale.

When under tension (supporting a weight), Geckskin’s full adhesive strength is applied to the load. However, remove the weight and flick the material up, and it easily disengages. Unlike tape, of course, it leaves nothing behind and doesn’t lose its stickiness with repeated use. You kind of have to see it to believe it.

Check out this video of the Geckskin guys sticking a monitor to a variety of surfaces.

It’s a cool invention, and maybe you already know exactly how you might use it. But it’s worth discussing a few applications. First and foremost? Hanging stuff.

Flat panel televisions look great on the wall, but they require sizeable, structurally sound screws and a few big holes in your wall. Once mounted, they’re more or less permanent. A television equipped with Geckskin would make the mounting process easy and the moving process easier. And of course, it’s not just televisions; you could hang pretty much anything heavy without destroying the wall in the process.

Geckskin might also be great material for climbing robots.

Stanford’s gecko-inspired StickyBot, for example, was conceived back in 2006. The robot climbs smooth surfaces using its own special synthetic gecko-like dry adhesive. That is, micro-hairs and Van der Waals forces allow it to climb wood paneling, painted metal, and glass. Carnegie Mellon’s Waalbot works on a similar premise.

But never mind robots, what about humans? Who wouldn’t want a set of Geckskin gloves and shoes? The project, in fact, received funding from DARPA for its Z-Man project to enable soldiers to scale vertical walls in urban warfare without the use of ropes and ladders. The agency explicitly lists geckos and spiders as inspiration for the project.

So then, when can we expect to get a few swatches of Geckskin? Though the team received a flood of inquiries from interested companies after they first announced the product, they say they’re still fine-tuning their creation. Crosby and Irschick think it’ll take at least a year to get a commercial product ready for the market.

Like any new material, taking it from lab to hardware store isn’t a perfectly straight path, and potential doesn’t always translate into a perfect consumer product. Whether Geckskin becomes the new Velcro remains to be seen, but in any case, it’s a great example of how nature can drive great engineering.

Irschick says, “Our design for Geckskin shows the true integrative power of evolution for inspiring synthetic design that can ultimately aid humans in many ways.”

Image Credit: University of Massachusetts/YouTube

Researchers Get Closer to Making Functional Human Sperm in the Lab


Advances in stem cells have allowed scientists to cultivate various types of tissue in the lab, yet some tissue types have remained out of reach. The germ cells that produce sperm, whose development process is among the most specialized, are among those that have eluded researchers. But successfully generating sperm from stem cells would allow infertile men to become fathers, whereas current efforts to expand fertility to more people for longer stretches of their lives have focused primarily on women.

A recent Stanford University study, published in Cell Reports, suggests that the problem may be simpler than previously thought. Researchers found that simply by producing stem cells from adult male skin cells and putting them in the sperm-making tubes of mice, they could obtain partly developed germ cells. The researchers hypothesized that if the cells had been placed in human testes, with their distinct and roomier topography, they would likely have resulted in functional sperm.

A previous study led by researcher Renee Reijo Pera, an author of the recent paper, had found it possible to make working germ cells starting with embryonic stem cells and cultivating them in vitro. But that process required some heavy-handed genetic engineering to work, which means it wouldn’t make healthy babies. And even if embryonic stem cells did produce functional sperm, that sperm would carry the genetics of the embryo, not those of a prospective father.

Japanese studies have explored other, more radical, methods to create sperm in the lab using mice. In one effort, researchers produced functional mouse sperm from newborn males’ gonads. In another, they produced sperm and egg from adult skin cells from animals of both sexes. The first method can’t ethically be tried on humans, and the second also resulted in genetic problems. However, the second method could make it possible for a lesbian couple to have a baby that belongs genetically to both partners.

The Stanford study was more narrowly focused on male infertility. It compared the development of sperm cells from induced stem cells of men who suffer from azoospermia, a genetic form of infertility in which a man doesn’t make sperm, with those of normally fertile men. To the researchers’ surprise, the infertile men’s skin cells did develop germ cells, though the fertile men’s cells produced 50-100 times as many.

“This research provides an exciting and important step for the promise of stem cell therapy in the treatment of azoospermia, the most severe form of male factor infertility. While the study clearly demonstrates the importance that genetics play in spermatogenesis, it also suggests that some of these limitations could potentially be overcome,” said Michael Eisenberg, a Stanford urologist who wasn’t involved in the study.

Researchers believe that men with azoospermia at some point produce sperm but later lose the supply of stem cells needed for the process. It might therefore be possible to give men with the genetic condition at least a limited window of fertility by collecting and freezing tissue when they are young and later implanting it into their testes.

(Buyer beware: The stem cells that didn’t land in the mouse’s tubules developed into tumors rather than germ cells.)

Being able to produce our procreative cells in the lab will also provide a useful tool in ongoing studies of how those most important cells develop, and will likely provide insights useful for other attempts to make sperm in the lab.

“Our dream is to use this model to make a genetic map of human germ-cell differentiation, including some of the very earliest stages,” Reijo Pera said.

Photos: Joshua Resnick / Shutterstock.com, MSU photo by Kelly Gorham, Rama via Wikimedia Commons

NeuroGrid — A Circuit Board Modeled after the Human Brain

Singularity Hub
NeuroGrid — A Circuit Board Modeled after the Human Brain

Although the basic computer architecture we rely on was designed to handle math and logic problems, it’s done a bang-up job of tackling everything from word processing and socializing to controlling the movements of artificial limbs. But as we demand increasingly human-like work from machines, pressure is mounting to rejigger and expand their basic architecture to better jibe with the brain’s way of doing things.

If we ever want to be able to run a computer that simulates the hundred billion neurons at work in a human brain, though, each of its silicon chips will have to sip, not gulp, energy. And while computers will have to process information through pathways more organic and complex than the classic von Neumann architecture, they will have to keep up a demanding pace.

Eyeing those problems on the horizon, a team of Stanford University engineers led by Kwabena Boahen has developed a circuit board, and its underlying chips, that simulates the activity of a million neurons 9,000 times faster than a personal computer could while being 100,000 times more energy efficient. They reported the findings in a recent issue of the Proceedings of the IEEE.

The circuit board, called Neurogrid, consists of 16 custom-designed Neurocore chips. Each chip simulates 65,536 neurons. All told, the board can simulate 1 million neurons and billions of synaptic connections.

By way of comparison, the neural chips IBM uses consist of 256 digital neurons and a couple hundred thousand synaptic connections. Qualcomm has announced, but not yet launched, a neuromorphic chip.

“From a pure energy perspective, the brain is hard to match,” Boahen said in a news release. With 80,000 times more neurons than Neurogrid, the brain uses just three times the power, according to the article.
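The scale and efficiency figures quoted above can be sanity-checked with some quick arithmetic. The inputs below come straight from the article; the derived per-neuron ratio is our own back-of-the-envelope estimate, not a figure from the paper:

```python
# Back-of-the-envelope check of the Neurogrid figures quoted above.

chips = 16
neurons_per_chip = 65_536
total_neurons = chips * neurons_per_chip
print(total_neurons)  # -> 1048576, i.e. "about 1 million neurons"

# The brain has ~80,000x more neurons than Neurogrid yet draws only
# ~3x the power, so per neuron it is roughly 80,000 / 3, or about
# 27,000 times, more energy efficient than the board.
neuron_ratio = 80_000
power_ratio = 3
per_neuron_advantage = neuron_ratio / power_ratio
print(round(per_neuron_advantage))  # -> 26667
```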

“The computer can compensate for its lack of parallelism by executing instructions blazingly fast, but it pays a steep cost in energy and time to shuttle far-flung data through its central processing unit,” Boahen and colleagues explain on the Neurogrid project website.

Here’s how the system works, briefly: The neuromorphic chip is analog, to capture the variety of ionic signals that neurons filter in the brain. It uses shared electronic circuits to maximize the number of “synaptic” connections, which are represented by pairing a particular ersatz-neuron’s address on the chip with a particular spot in RAM. To maximize speed, the circuit board uses a tree rather than a mesh network.
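To make the address-in-RAM idea concrete, here is a toy sketch, in software, of the kind of address-event scheme the passage describes. This is purely illustrative (Neurogrid’s real implementation is analog hardware with a tree network, and all names below are ours): each simulated neuron has an address, and when it spikes, a RAM-like table maps that address to the synaptic targets the event should reach.

```python
# Toy address-event routing sketch (illustrative only; not Neurogrid's
# actual design). A shared lookup table stands in for the RAM that
# pairs a neuron's address with its synaptic connections.

# Maps a spiking neuron's address to the addresses of the neurons
# it connects to.
synapse_table = {
    0: [2, 3],   # neuron 0 projects to neurons 2 and 3
    1: [3],
    2: [0],
}

def route_spike(source_address, table):
    """Deliver an address event to every target stored for the source."""
    return table.get(source_address, [])

# When neuron 0 fires, the event fans out to its stored targets:
targets = route_spike(0, synapse_table)
print(targets)  # -> [2, 3]
```

Sharing one table across all neurons is what lets a modest amount of memory represent billions of connections, at the cost of serializing spike delivery.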

While the chip would save money in energy use, it’s not cheap. The prototype cost $40,000 to make. But Boahen said that adopting modern industrial manufacturing processes and producing in bulk would bring the price down to $400.

Neurocore isn’t made for the garage user, anyway. Like most neural chips and networks, it’s designed to model brain activity for researchers. To accommodate different research objectives, it can be programmed to act like different cortical layers.

But lower power usage isn’t just a cost issue. It would free future human-acting robots from cumbersome power supplies and could eventually make it safe to implant computers in human patients.

Boahen plans to test the human-like capabilities of the chip by using it to power robotic limbs.

“It is important for us to show that the components of our system replicate their biological analogs. Otherwise, we will have a difficult time convincing others that our artificial brain is working like the human brain,” the Neurogrid website concludes.

Photos: Stanford University

Neurogames are Ready to Take Flight — Expect a Breakout Year Ahead

Singularity Hub
Neurogames are Ready to Take Flight — Expect a Breakout Year Ahead

neurogaming-big

“We’re very close.”

In just three words, Palmer Luckey of Oculus VR fame perfectly summarized not only where virtual reality stands but perhaps the entire neurogaming industry. Luckey was on hand to present alongside other industry leaders at the second annual NeuroGaming Conference in San Francisco. Last year’s conference signaled the birth of an industry segment that may forever replace traditional gaming as we’ve known it. Sales of videogames for casual gamers are in decline, but a new and ultimately more meaningful form of gaming has already taken shape to replace them.

But what are neurogames exactly?

Zack Lynch, the conference organizer, describes them as “a collection of technologies that incorporate your full nervous system into gameplay.” Specifically, that could include wearable devices like EEG headsets and heart-rate monitors along with biofeedback platforms to do things like eye tracking and brain wave sensing. Add these inputs to platforms like virtual and augmented reality, and developers now have the opportunity to create fully immersive experiences that were never before possible.

PrioVR, one of the most exciting new technologies on hand for demonstration, showcases just how much has changed this year. The company recently completed a successful Kickstarter campaign and plans to ship to backers this fall. The system, fully compatible with devices like the Oculus Rift, comprises a full body-tracking suit, which allows the gamer to move around inside a virtual world.

Imagine playing a round of golf at the Masters’ home course, Augusta National. Now imagine doing that in your bedroom wearing footed pajamas.

As with everything touched by Moore’s Law, it’s the miniaturization of components that has driven the industry forward this year. PrioVR isn’t an awkward system of clunky equipment but rather a few sensors placed on key parts of your body. As hardware costs continue to free-fall, expect more technology integration going forward.

CastAR, another Kickstarter project to join the elite million-dollar club, combines true virtual reality with augmented reality to create what its makers call “projected reality.” Their device creates 3D holographic projections right in front of you (much like the famous holographic chess scene from Star Wars).

Perhaps the most intriguing example of integration is the work of Neuroelectrics, out of Barcelona. Ana Maiques, Neuroelectrics’ CEO, had the crowd salivating over the Starstim tCS (transcranial current stimulation) stimulator, which has been designed for use with the Oculus Rift. If you’re not already familiar with the growing body of research around tCS, you may want to brush up on the subject before reading on. The prospect of virtual reality training with the cognitive boost of tCS could create armies of experts in no time. A pajama-clad bedroom golfer might perfect that golf swing in half the time! On a more meaningful level, this could accelerate learning in incredible ways.

Given how quickly we’ll need to retrain our workforce in the future, this could have serious implications for the future of training and development.

Currently, Starstim is going through regulatory review and is available for research purposes only. This highlights a growing distinction between casual neurogames and those being developed with clinical interventions in mind.

neurogaming2

Already, a number of neurogames are being prescribed as therapies to treat issues like PTSD, ADHD, and other cognitive and behavioral conditions. We’re not terribly far from a world in which devices like Melon might replace today’s pharmaceuticals in treating depression. Melon’s headband monitors a user’s brainwave activity and coaches them to enhance focus and relaxation.

No medicine comes without side effects. While it’s true that neurogames offer new ways of treating debilitating psychological conditions, a growing set of ethical concerns remains. We’re only a few hardware upgrades away from Google Glass-like devices capable of measuring your mood in real time. No one is quite sure whether thought-based advertising that responds directly to your emotional state is a good idea. Additionally, all of these devices will be recording and storing our emotional data. Where it’s stored, who owns it, and who has access should prompt serious debate going forward.

Ethical issues aside, it’s clear from the conference that the entire industry is ready to pop. When that happens, and who fires the first shot, is anybody’s guess. Of course, following its $2 billion acquisition by Facebook in March, Vegas odds are on Oculus VR to lead the way. For now, an ever-growing army of charitable Kickstarter backers and early adopters will continue to carry the industry over the chasm and into mainstream society. Neurogames are coming, and when they do, the world won’t quite be the same.

[Images: flickr/Alexander C. Marcelo Photography]