Wednesday, October 18, 2017

Stunning AI Breakthrough Takes Us One Step Closer to the Singularity

As a new Nature paper points out, “There are an astonishing 10 to the power of 170 possible board configurations in Go—more than the number of atoms in the known universe.” (Image: DeepMind)
Remember AlphaGo, the first artificial intelligence to defeat a grandmaster at Go?
Well, the program just got a major upgrade, and it can now teach itself how to dominate the game without any human intervention. But get this: In a tournament that pitted AI against AI, this juiced-up version, called AlphaGo Zero, defeated the regular AlphaGo by a whopping 100 games to 0, signifying a major advance in the field. Hear that? It’s the technological singularity inching ever closer.

A new paper published in Nature today describes how the artificially intelligent system that defeated Go grandmaster Lee Sedol in 2016 got its digital ass kicked by a new-and-improved version of itself. And it didn’t just lose by a little—it couldn’t even muster a single win after playing a hundred games. Incredibly, it took AlphaGo Zero (AGZ) just three days to train itself from scratch and acquire literally thousands of years of human Go knowledge simply by playing itself. The only input it had was the positions of the black and white stones on the board.
In addition to devising completely new strategies, the new system is also considerably leaner and meaner than the original AlphaGo.
Lee Sedol getting crushed by AlphaGo in 2016. (Image: AP)
Now, every once in a while the field of AI experiences a “holy shit” moment, and this would appear to be one of them. This latest achievement qualifies for a number of reasons.

First of all, the original AlphaGo had the benefit of learning from literally thousands of previously played Go games, including those played by human amateurs and professionals. AGZ, on the other hand, received no help from its human handlers, and had access to absolutely nothing aside from the rules of the game. Using “reinforcement learning,” AGZ played itself over and over again, “starting from random play, and without any supervision or use of human data,” according to the Google-owned DeepMind researchers in their study. This allowed the system to improve and refine its digital brain, known as a neural network, as it continually learned from experience. This basically means that AlphaGo Zero was its own teacher.
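The “starting from random play” recipe can be sketched in miniature. The toy below is purely illustrative: it swaps Go and the deep neural network for a trivial take-1-or-2-stones game and a table of values, and every name and hyperparameter is an assumption, not DeepMind’s. But the loop has the same shape: the system plays itself, then nudges its value estimates toward the outcomes it actually observed.

```python
import random

# Toy self-play reinforcement learning, in the spirit of AGZ's training loop
# (vastly simplified: tabular values instead of a neural network, and a tiny
# Nim-like game instead of Go). The "player" improves purely by playing
# against itself, starting from random play.

MOVES = (1, 2)      # take 1 or 2 stones; whoever takes the last stone wins
START = 7           # initial pile size

def train(episodes=20000, alpha=0.1, eps=0.2, seed=0):
    rng = random.Random(seed)
    # Q[(pile, move)] = estimated value of `move` with `pile` stones left,
    # from the perspective of the player about to move (+1 win, -1 loss).
    Q = {}
    for _ in range(episodes):
        pile, history = START, []
        while pile > 0:
            legal = [m for m in MOVES if m <= pile]
            if rng.random() < eps:
                move = rng.choice(legal)                                # explore
            else:
                move = max(legal, key=lambda m: Q.get((pile, m), 0.0))  # exploit
            history.append((pile, move))
            pile -= move
        # The player who made the last move won; rewards alternate going back.
        reward = 1.0
        for state in reversed(history):
            Q[state] = Q.get(state, 0.0) + alpha * (reward - Q.get(state, 0.0))
            reward = -reward
    return Q

def best_move(Q, pile):
    legal = [m for m in MOVES if m <= pile]
    return max(legal, key=lambda m: Q.get((pile, m), 0.0))
```

After training, the table recovers the game’s known optimal strategy (always leave your opponent a multiple of three stones) with no strategy ever being supplied by a human.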

“This technique is more powerful than previous versions of AlphaGo because it is no longer constrained by the limits of human knowledge,” notes the DeepMind team in a release. “Instead, it is able to learn tabula rasa [from a clean slate] from the strongest player in the world: AlphaGo itself.”

When playing Go, the system considers the most probable next moves (a “policy network”), and then estimates the probability of winning based on those moves (its “value network”). AGZ requires about 0.4 seconds to make these two assessments. The original AlphaGo was equipped with a pair of separate neural networks to make similar evaluations, but for AGZ, the DeepMind developers merged the policy and value networks into one, allowing the system to learn more efficiently. What’s more, the new system is powered by just four tensor processing units (TPUs)—specialized chips for neural network training. The old AlphaGo needed 48 TPUs.
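The merged architecture can be pictured as one network with two output heads. The sketch below uses made-up layer sizes and random, untrained weights, so it shows only the shape of the idea, not DeepMind’s actual design: a shared trunk feeding a policy head (a probability over moves) and a value head (a single win estimate).

```python
import numpy as np

# Minimal sketch of a merged policy/value network: one shared trunk whose
# output feeds two heads. Weights are random here; in AGZ they are learned
# from self-play. Board and layer sizes are illustrative assumptions.

rng = np.random.default_rng(0)
BOARD_CELLS = 9 * 9      # small board for illustration (real Go is 19x19)
HIDDEN = 64

W_trunk = rng.normal(0, 0.1, (BOARD_CELLS, HIDDEN))
W_policy = rng.normal(0, 0.1, (HIDDEN, BOARD_CELLS + 1))   # +1 slot for "pass"
W_value = rng.normal(0, 0.1, (HIDDEN, 1))

def forward(board):
    """board: flat array of -1 (white stone), 0 (empty), +1 (black stone)."""
    h = np.tanh(board @ W_trunk)               # shared representation
    logits = h @ W_policy
    policy = np.exp(logits - logits.max())
    policy /= policy.sum()                     # softmax: probabilities over moves
    value = float(np.tanh(h @ W_value)[0])     # win estimate squashed to [-1, 1]
    return policy, value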

After just three days of self-play training and a total of 4.9 million games played against itself, AGZ acquired the expertise needed to trounce the original AlphaGo (which, by comparison, had 30 million games for inspiration). After 40 days of self-training, AGZ defeated another, more sophisticated version called AlphaGo “Master,” the version that beat the world’s top-ranked Go player, Ke Jie. Earlier this year, both the original AlphaGo and AlphaGo Master won a combined 60 games against top professionals. The rise of AGZ, it would now appear, has made these previous versions obsolete.


This is a major achievement for AI, and for the subfield of reinforcement learning in particular. By teaching itself, the system matched and then exceeded thousands of years of accumulated human knowledge in just a few days, while also developing unconventional strategies and creative new moves of its own.
For Go players, the breakthrough is as sobering as it is exciting; they’re learning things from AI that they could have never learned on their own, or would have needed an inordinate amount of time to figure out.
“[AlphaGo Zero’s] games against AlphaGo Master will surely contain gems, especially because its victories seem effortless,” wrote Andy Okun and Andrew Jackson, members of the American Go Association, in a Nature News and Views article. “At each stage of the game, it seems to gain a bit here and lose a bit there, but somehow it ends up slightly ahead, as if by magic... The time when humans can have a meaningful conversation with an AI has always seemed far off and the stuff of science fiction. But for Go players, that day is here.”
No doubt, AGZ represents a disruptive advance in the world of Go, but what about its potential impact on the rest of the world? According to Nick Hynes, a grad student at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), it’ll be a while before a specialized tool like this will have an impact on our daily lives.

“So far, the algorithm described only works for problems where there are a countable number of actions you can take, so it would need modification before it could be used for continuous control problems like locomotion [for instance],” Hynes told Gizmodo. “Also, it requires that you have a really good model of the environment. In this case, it literally knows all of the rules. That would be as if you had a robot for which you could exactly predict the outcomes of actions—which is impossible for real, imperfect physical systems.”

The nice part, he says, is that there are several other lines of AI research that address both of these issues (e.g. machine learning, evolutionary algorithms, etc.), so it’s really just a matter of integration. “The real key here is the technique,” says Hynes.

“As expected—and desired—we’re moving farther away from the classic pattern of getting a bunch of human-labeled data and training a model to imitate it,” he said. “What we’re seeing here is a model free from human bias and presuppositions: It can learn whatever it determines is optimal, which may indeed be more nuanced than our own conceptions of the same. It’s like an alien civilization inventing its own mathematics which allows it to do things like time travel,” to which he added: “Although we’re still far from ‘The Singularity,’ we’re definitely heading in that direction.”

Noam Brown, a Carnegie Mellon University computer scientist who helped to develop the first AI to defeat top humans in no-limit poker, says the DeepMind researchers have achieved an impressive result, and that it could lead to bigger, better things in AI.

“While the original AlphaGo managed to defeat top humans, it did so partly by relying on expert human knowledge of the game and human training data,” Brown told Gizmodo. “That led to questions of whether the techniques could extend beyond Go. AlphaGo Zero achieves even better performance without using any expert human knowledge. It seems likely that the same approach could extend to all perfect-information games [such as chess and checkers]. This is a major step toward developing general-purpose AIs.”

As both Hynes and Brown admit, this latest breakthrough doesn’t mean the technological singularity—that hypothesized time in the future when greater-than-human machine intelligence achieves explosive growth—is imminent. But it should give us pause for thought. Once we teach a system the rules of a game, or the constraints of a real-world problem, the power of reinforcement learning makes it possible to simply press the start button and let the system do the rest. It will then figure out the best ways to succeed at the task, devising solutions and strategies that are beyond human capacities, and possibly even human comprehension.

As noted, AGZ and the game of Go represent an oversimplified, constrained, and highly predictable picture of the world, but in the future, AI will be tasked with more complex challenges. Eventually, self-teaching systems will be used to solve more pressing problems, such as folding proteins to conjure up new medicines and biotechnologies, figuring out ways to reduce energy consumption, or designing new materials. A highly generalized self-learning system could also be tasked with improving itself, leading to artificial general intelligence (i.e. a very human-like intelligence) and even artificial superintelligence.

As the DeepMind researchers conclude in their study, “Our results comprehensively demonstrate that a pure reinforcement learning approach is fully feasible, even in the most challenging of domains: it is possible to train to superhuman level, without human examples or guidance, given no knowledge of the domain beyond basic rules.”

And indeed, now that human players are no longer dominant in games like chess and Go, it can be said that we’ve already entered the era of superintelligence. This latest breakthrough is the tiniest hint of what’s still to come.


ORIGINAL: Gizmodo 
By George Dvorsky 

Thursday, September 14, 2017

IBM Makes Breakthrough in Race to Commercialize Quantum Computers

Photographer: David Paul Morris
Researchers at International Business Machines Corp. have developed a new approach for simulating molecules on a quantum computer.

The breakthrough, outlined in a research paper to be published in the scientific journal Nature Thursday, uses a technique that could eventually allow quantum computers to solve difficult problems in chemistry and electro-magnetism that cannot be solved by even the most powerful supercomputers today.

In the experiments described in the paper, IBM researchers used a quantum computer to derive the lowest energy state of a molecule of beryllium hydride. Knowing the energy state of a molecule is a key to understanding chemical reactions.

In the case of beryllium hydride, a supercomputer can solve this problem, but the standard techniques for doing so cannot be used for large molecules because the number of variables exceeds the computational power of even these machines.
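What “solving” this problem means on a classical machine can be made concrete: a molecule’s lowest energy state is the smallest eigenvalue of its Hamiltonian matrix, and that matrix grows exponentially with molecule size. The sketch below uses an arbitrary 2x2 matrix purely for illustration; it is not beryllium hydride’s actual Hamiltonian.

```python
import numpy as np

# Finding a "lowest energy state" computationally: the ground-state energy of
# a molecule is the smallest eigenvalue of its Hamiltonian matrix. Classical
# methods diagonalize this matrix exactly, but its size grows exponentially
# with the number of electrons, which is why large molecules are out of reach.
# The 2x2 matrix below is an arbitrary toy example, not beryllium hydride.

H = np.array([[ 1.0, -0.5],
              [-0.5, -1.0]])        # Hermitian: H equals its own transpose

energies = np.linalg.eigvalsh(H)    # eigvalsh returns eigenvalues in ascending order
ground_state_energy = energies[0]   # smallest eigenvalue = ground-state energy
print(ground_state_energy)
```

For this toy matrix the exact answer is -sqrt(5)/2; a quantum algorithm aims to approximate the same quantity for matrices far too large to ever write down.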

The IBM researchers created a new algorithm specifically designed to take advantage of the capabilities of a quantum computer that has the potential to run similar calculations for much larger molecules, the company said.

The problem with existing quantum computers, including the one IBM used for this research, is that they produce errors, and as the size of the molecule being analyzed grows, the calculation strays further and further from chemical accuracy. The inaccuracy in IBM’s experiments varied between 2 and 4 percent, Jerry Chow, the manager of experimental quantum computing for IBM, said in an interview.

Alan Aspuru-Guzik, a professor of chemistry at Harvard University who was not part of the IBM research, said that the Nature paper is an important step. “The IBM team carried out an impressive series of experiments that holds the record as the largest molecule ever simulated on a quantum computer,” he said.

But Aspuru-Guzik said that quantum computers would be of limited value until their calculation errors can be corrected. “When quantum computers are able to carry out chemical simulations in a numerically exact way, most likely when we have error correction in place and a large number of logical qubits, the field will be disrupted,” he said in a statement. He said applying quantum computers in this way could lead to the discovery of new pharmaceuticals or organic materials.

IBM has been pushing to commercialize quantum computers and recently began allowing anyone to experiment with running calculations on a 16-qubit quantum computer it has built to demonstrate the technology.

In a classical computer, information is stored using binary units, or bits. A bit is either a 0 or a 1. A quantum computer instead takes advantage of quantum mechanical properties to process information using quantum bits, or qubits. A qubit can be both a 0 and a 1 at the same time, in any weighted combination of the two. Also, in a classical computer, each logic gate functions independently. In a quantum computer, the qubits affect one another. This allows a quantum computer, in theory, to process information far more efficiently than a classical computer.
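Those two properties are easy to demonstrate with a tiny simulation. The sketch below is a generic two-qubit statevector model, not IBM’s hardware: a Hadamard gate puts one qubit into an equal superposition of 0 and 1, and a CNOT gate then couples it to a second qubit, so the two no longer behave independently the way classical logic gates do.

```python
import numpy as np

# Two-qubit statevector simulation: superposition via a Hadamard gate,
# then entanglement via a CNOT gate.

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],                  # controlled-NOT: flips qubit 1
                 [0, 1, 0, 0],                  # only when qubit 0 is 1
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4)
state[0] = 1.0                                  # start in |00>
state = np.kron(H, I) @ state                   # qubit 0 into superposition
state = CNOT @ state                            # entangle: (|00> + |11>)/sqrt(2)

probs = np.abs(state) ** 2                      # measurement probabilities
print(probs)   # only |00> and |11> are possible, each with probability 0.5
```

Measuring one qubit now instantly fixes the other: the 01 and 10 outcomes have zero probability, something no pair of independent classical bits can reproduce.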

The machine IBM used for the Nature paper consisted of seven qubits created from supercooled superconducting materials. In the experiment, six of these qubits were used to map the energy states of the six electrons in the beryllium hydride molecule. Rather than providing a single, precise and accurate answer, as a classical computer does, a quantum computer must run a calculation hundreds of times, with an average used to arrive at a final answer.

Chow said his team is currently working to improve the speed of its quantum computer with the aim of reducing the time it takes to run each calculation from seconds to microseconds. He said they were also working on ways to reduce its error rate.

IBM is not the only company working on quantum computing. Alphabet Inc.’s Google is working toward creating a 50-qubit quantum computer, and has pledged to use this machine to solve a previously unsolvable calculation from chemistry or electro-magnetism by the end of the year. Also competing to commercialize quantum computing are Rigetti Computing, a startup in Berkeley, California, which is building its own machine, and Microsoft Corp., which is working with an unproven quantum computing architecture that is, in theory, inherently error-free. D-Wave Systems Inc., a Canadian company, is currently the only company to sell quantum computers commercially.

ORIGINAL: Bloomberg
By Jeremy Kahn September 13, 2017

Sunday, September 10, 2017

15 Really Good Things Happening in Science Right Now

3Dme Creative Studio/Shutterstock.com
Enough bad news, here's the good stuff.

There's no shortage of bad news in the media, but sometimes we spend so much time focussing on nuclear weapons and disappearing seas that we forget there are some incredible things happening in the world of science and technology.

To provide you with some much-needed hope for the future, we've put together a list of some of the best science news of 2017 so far.

1. African wild dogs communicate with each other in the most adorable way ever: sneezes
Scientists have observed African wild dogs in Botswana sneezing at each other in order to cast their vote on whether it's time to get up and go hunting. And yes, there's video footage of it.

2. Vaccines have saved the lives of almost 20 million children in poor countries since 2001

3. We're about to cross the 'quantum supremacy' limit in computing
At the 4th International Conference on Quantum Technologies held in Moscow in July, a team of American and Russian researchers announced they'd successfully tested a record-breaking 51-qubit device, taking us closer than ever before to a functioning quantum computer.

4. Scientists might have finally discovered the trigger that kicks off autoimmune diseases
Autoimmune diseases occur when the body's immune system starts to attack itself, but despite being incredibly widespread, researchers haven't been able to nail down what triggers this strange reaction in the first place.

Now, scientists have identified a chain reaction that could potentially explain why our own bodies can turn against healthy cells, potentially transforming the way we look at autoimmune diseases and the way we treat them.

5. We're finally getting close to achieving sustainable nuclear fusion
Nuclear fusion could be the key to producing almost-unlimited energy with few byproducts other than saltwater, but researchers have long struggled to create a machine that could sustainably control such a powerful reaction.

But that's changing. At the end of 2015, Germany switched on a massive nuclear fusion reactor that's since successfully been able to contain a scorching hot blob of hydrogen plasma.


They're not the only ones, either, with South Korea and China both achieving record-breaking reactions in their own fusion machines. The UK has also switched on a revolutionary type of reactor that is now sustainably generating plasma within its core.

In fact, one MIT scientist has enthusiastically predicted that thanks to all these new advances, we should be able to get fusion energy on the grid by 2030.

6. Researchers are closer than ever before to having a drug that can treat autism symptoms
A small, but promising clinical trial in the US showed this year that a 100-year-old drug called suramin can measurably improve the symptoms of autism spectrum disorder (ASD) in children.

There's a lot more work to be done, but it's the first time we've been so close to having a drug that can potentially treat ASD symptoms. 

7. Scientists have discovered that crystals can be bent
Researchers have shown that crystals - which are traditionally brittle and inflexible - can be so flexible they can be bent repeatedly and even tied up in knots, overhauling our current understanding of the structures and challenging the very definition of a crystal.

The research opens up a whole new class of materials that could revolutionise electronics and technology.


8. You no longer need to pay ridiculous amounts to access peer-reviewed science research
The scientific community is fighting back against crazy paywalls, with a new study showing that more than a quarter of all scientific papers are now available free online thanks to the Unpaywall app.

9. We're getting really close to eradicating the second disease from the planet
First, humans got rid of smallpox. Now we're on the verge of wiping out the Guinea Worm parasite, which is a living nightmare that painfully erupts from people's skin.

At the start of 2015 there were just 126 cases of Guinea Worm left on Earth, mostly thanks to an ingenious and cheap drinking straw filter that stops people from being contaminated via water. As of May this year, there were only nine recorded cases.

L. Grubb/The Carter Centre

10. Finally, schools around the US are pushing back their start times
After numerous studies showing US schools start so early that they're putting a health strain on students, schools around the country are finally beginning to take note and shift their start time from 8.00 am to 8.30 am. And it's working surprisingly well.

11. Scientists think they might be able to reverse Alzheimer's memory loss
Lost memories might not be gone forever. An enzyme that interferes with key memory-forming processes in people with Alzheimer's can now be specifically targeted thanks to the discovery of a protein that helps it do its dirty work, according to new research out of MIT.

12. You could win US$1 million by solving this chess puzzle
Generous scientists are offering a US$1 million prize to anybody who can solve a fiendishly complicated twist on a classic chess problem called the Queen's Puzzle.

The beauty of the challenge is you don't even really need to understand the rules of chess to take part, but the catch is that it's so complicated the researchers predict it could take thousands of years... still, no risk, no reward, right? 
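For readers curious what makes the puzzle hard: it builds on the classic n-queens problem, which a standard backtracking search handles easily for small boards. The sketch below counts solutions to the plain n-queens problem only; it does not model the prize variant's pre-placed queens, which are what make that version so fiendish.

```python
# Classic n-queens: place n queens on an n x n board so that no two share a
# column or diagonal (one queen per row by construction). A backtracking
# search prunes any placement attacked by an earlier queen.

def count_queens(n):
    def place(row, cols, diag1, diag2):
        if row == n:
            return 1                      # all n queens placed: one solution
        total = 0
        for col in range(n):
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue                  # square attacked by an earlier queen
            total += place(row + 1, cols | {col},
                           diag1 | {row - col}, diag2 | {row + col})
        return total
    return place(0, frozenset(), frozenset(), frozenset())

print(count_queens(8))   # 92 solutions on a standard chessboard
```

The count grows explosively with the board size, which is a hint of why the constrained prize version is expected to resist brute force for so long.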

13. We've discovered a vitamin that could reduce the incidence of birth defects and miscarriages worldwide 
In what scientists are calling "the most important discovery for pregnant women since folate", a 12-year study has revealed that women could avoid miscarriages and birth defects by simply taking vitamin B3 during pregnancy.

So far, this effect has only been demonstrated in animal studies, but the results are extremely encouraging and human trials are imminent.

14. Graphene's superconductive abilities have finally been unlocked... 
At the start of this year, researchers finally unlocked the long-rumoured superconductive power of graphene, without having to dope the material.

Since then scientists have found even better ways of turning the wonder material into a superconductor, capable of shuttling electrons with zero resistance.

15. ...And researchers have shown electrons can flow through the material like liquid
Potentially even more impressive: researchers have shown that, through a new technique, electrons can actually flow through graphene like liquid, reaching limits physicists previously thought were impossible. This 'superballistic flow' could prove to be even more effective than superconductivity.

6 SEP 2017


Monday, August 14, 2017

Medellín Honors Scientific Researcher Vanessa Restrepo Schild

"...I want to infect you with an idea, one that stings you and spurs you on until you make it real. I want you to set out to make the majority, all of our young people, fall in love with Science...!" Vanessa Restrepo Schild
"We are honoring Vanessa Restrepo, who is 24 years old and was born in Medellín. She currently leads research that produced the first artificial retina made from synthetic biological tissue, which is expected to help people with visual impairments within a few years. One of my greatest concerns is the role models our children and young people look up to; that they dream of being the 'tough guys' of their neighborhoods, or that they don't dream at all.
That is why I want her story, the story of a woman of science who excels at one of the most prestigious universities in the world, to reach every neighborhood and corner of Medellín. May Vanessa become a role model.
Thank you and congratulations!" Federico Gutiérrez Zuluaga, Mayor of Medellín


Friday, June 16, 2017

Scientists Design Molecular System for Artificial Photosynthesis

System is designed to mimic key functions of the photosynthetic center in green plants to convert solar energy into chemical energy stored by hydrogen fuel

Etsuko Fujita and Gerald Manbeck of Brookhaven Lab's Chemistry Division carried out a series of experiments to understand why their molecular system with six light-absorbing centers (made of ruthenium metal ions bound to organic molecules) produced more hydrogen than the system with three such centers. This understanding is key to designing more efficient molecular complexes for converting solar energy into chemical energy—a conversion that green plants do naturally during photosynthesis.
Finding inspiration from nature
The leaves of green plants contain hundreds of pigment molecules (chlorophyll and others) that absorb light at particular wavelengths. When light of the proper wavelength strikes one of these molecules, the molecule enters an excited state. Energy from this excited state is shuttled along a chain of pigment molecules until it reaches a specific type of chlorophyll in the photosynthetic reaction center. Here, the energy is used to drive the charge-separation process required for photosynthesis to proceed. The electron “hole” left behind in the chlorophyll molecule is used for water-to-oxygen conversion. Hydrogen ions formed during the water-splitting process are eventually used for the reduction of carbon dioxide to glucose in the second stage of photosynthesis, known as the light-independent reaction.
UPTON, NY—Photosynthesis in green plants converts solar energy to stored chemical energy by transforming atmospheric carbon dioxide and water into sugar molecules that fuel plant growth. Scientists have been trying to artificially replicate this energy conversion process, with the objective of producing environmentally friendly and sustainable fuels, such as hydrogen and methanol. But mimicking key functions of the photosynthetic center, where specialized biomolecules carry out photosynthesis, has proven challenging. Artificial photosynthesis requires designing a molecular system that can absorb light, transport and separate electrical charge, and catalyze fuel-producing reactions—all complicated processes that must operate synchronously to achieve high energy-conversion efficiency.

Now, chemists from the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory and Virginia Tech have designed two photocatalysts (materials that accelerate chemical reactions upon absorbing light) that incorporate individual components specialized for light absorption, charge separation, or catalysis into a single “supramolecule.” In both molecular systems, multiple light-harvesting centers made of ruthenium (Ru) metal ions are connected to a single catalytic center made of rhodium (Rh) metal ions through a bridging molecule that promotes electron transfer from the Ru centers to the Rh catalyst, where hydrogen is produced.
Photosystems (PS) I and II are large protein complexes that contain light-absorbing pigment molecules needed for photosynthesis. PS II captures energy from sunlight to extract electrons from water molecules, splitting water into oxygen and hydrogen ions (H+) and producing chemical energy in the form of ATP. PS I uses those electrons and H+ to reduce NADP+ (an electron-carrier molecule) to NADPH. The chemical energy contained in ATP and NADPH is then used in the light-independent reaction of photosynthesis to convert carbon dioxide to sugars.
They compared the hydrogen-production performance and analyzed the physical properties of the supramolecules, as described in a paper published in the June 1 online edition of Journal of the American Chemical Society, to understand why the photocatalyst with six as opposed to three Ru light absorbers produces more hydrogen and remains stable for a longer period of time. 

“Developing efficient molecular systems for hydrogen production is difficult because processes are occurring at different rates,” said lead author Gerald Manbeck, a chemist in the artificial photosynthesis group at Brookhaven Lab. “Completing the catalytic turnover of hydrogen before the separated charges—the negatively charged light-excited electron and positive ‘hole’ left behind after the excited molecule absorbs light energy—have a chance to recombine and wastefully produce heat is one of the major challenges.”

Another complication is that two electrons are needed to produce each hydrogen molecule. For catalysis to happen, the system must be able to hold the first electron long enough for the second to show up. “By building supramolecules with multiple light absorbers that may work independently, we are increasing the probability of using each electron productively and improving the molecules’ ability to function under low light conditions,” said Manbeck.

Manbeck began making the supramolecules at Virginia Tech in 2012 with the late Karen Brewer, coauthor and his postdoctoral advisor. He discovered that the four-metal (tetrametallic) system with three Ru light-absorbing centers and one Rh catalytic center yielded only 40 molecules of hydrogen for every catalyst molecule and ceased functioning after about four hours. In comparison, the seven-metal (heptametallic) system with six Ru centers and one Rh center was more than seven times more efficient, cycling 300 times to produce hydrogen for 10 hours. This great disparity in efficiency and stability was puzzling because the supramolecules contain very similar components.
This depiction of the heptametallic system upon exposure to light shows light harvesting by the six Ru centers (red) and electron transfer to the Rh catalyst (black), where hydrogen is produced. Efficient electron transfer to Rh is essential for realizing high catalytic performance.

Manbeck joined Brookhaven in 2013 and has since carried out a series of experiments with coauthor Etsuko Fujita, leader of the artificial photosynthesis group, to understand the fundamental causes for the difference in performance.

“The ability to form the charge-separated state is a partial indicator of whether a supramolecule will be a good photocatalyst, but realizing efficient charge separation requires fine-tuning the energetics of each component,” said Fujita. “To promote catalysis, the Rh catalyst must be low enough in energy to accept the electrons from the Ru light absorbers when the absorbers are exposed to light.”

Through cyclic voltammetry, an electrochemical technique that shows the energy levels within a molecule, the scientists found that the Rh catalyst of the heptametallic system is slightly more electron-poor and thus more receptive to receiving electrons than its counterpart in the tetrametallic system. This result suggested that the charge transfer was favorable in the heptametallic but not the tetrametallic system.

They verified their hypothesis with a time-resolved technique called nanosecond transient absorption spectroscopy, in which a molecule is promoted to an excited state by an intense laser pulse and the decay of the excited state is measured over time. The resulting spectra revealed the presence of a Ru-to-Rh charge transfer in the heptametallic system only.

“The data not only confirmed our hypothesis but also revealed that the excited-state charge separation occurs much more rapidly than we had imagined,” said Manbeck. “In fact, the charge migration happens faster than the time resolution of our instrument, and probably involves short-lived, high-energy excited states.” The researchers plan to seek a collaborator with faster instrumentation who can measure the exact rate of charge separation to help clarify the mechanism.

In a follow-up experiment, the scientists performed the transient absorption measurement under photocatalytic operating conditions, with a reagent used as the ultimate source of electrons to produce hydrogen (a scalable artificial photosynthesis of hydrogen fuel from water would require replacing the reagent with electrons released during water oxidation). The excited state generated by the laser pulse rapidly accepted an electron from the reagent. They discovered that the added electron resides on Rh in the heptametallic system only, further supporting the charge migration to Rh predicted by cyclic voltammetry.

“The high photocatalytic turnover of the heptametallic system and the principles governing charge separation that were uncovered in this work encourage further studies using multiple light-harvesting units linked to single catalytic sites,” said Manbeck.

This research is supported by DOE’s Office of Science.
Brookhaven National Laboratory is supported by the Office of Science of the U.S. Department of Energy. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

Related Links

ORIGIN: Brookhaven National Lab
June 2, 2017
Contact: Ariana Tantillo, (631) 344-2347, or Peter Genzer, (631) 344-3174

Thursday, June 15, 2017

Scientists Hack a Human Cell and Reprogram It Like a Computer

Cells are basically tiny computers: They send and receive inputs and output accordingly. If you chug a Frappuccino, your blood sugar spikes, and your pancreatic cells get the message. Output: more insulin.

But cellular computing is more than just a convenient metaphor. In the last couple of decades, biologists have been working to hack the cells’ algorithm in an effort to control their processes. They’ve upended nature’s role as life’s software engineer, incrementally editing a cell’s algorithm—its DNA—over generations. In a paper published today in Nature Biotechnology, researchers programmed human cells to obey 109 different sets of logical instructions. With further development, this could lead to cells capable of responding to specific directions or environmental cues in order to fight disease or manufacture important chemicals.
Large-scale design of robust genetic circuits with multiple inputs and outputs for mammalian cells
Benjamin H Weinberg, N T Hang Pham, Leidy D Caraballo, Thomas Lozanoski, Adrien Engel, Swapnil Bhatia & Wilson W Wong

Nature Biotechnology 35, 453–462 (2017) 
Received 20 June 2016
Accepted 27 January 2017
Published online 27 March 2017

Engineered genetic circuits for mammalian cells often require extensive fine-tuning to perform as intended. We present a robust, general, scalable system, called 'Boolean logic and arithmetic through DNA excision' (BLADE), to engineer genetic circuits with multiple inputs and outputs in mammalian cells with minimal optimization. The reliability of BLADE arises from its reliance on recombinases under the control of a single promoter, which integrates circuit signals on a single transcriptional layer. We used BLADE to build 113 circuits in human embryonic kidney and Jurkat T cells and devised a quantitative, vector-proximity metric to evaluate their performance. Of 113 circuits analyzed, 109 functioned (96.5%) as intended without optimization. The circuits, which are available through Addgene, include a 3-input, two-output full adder; a 6-input, one-output Boolean logic look-up table; circuits with small-molecule-inducible control; and circuits that incorporate CRISPR–Cas9 to regulate endogenous genes. BLADE enables execution of sophisticated cellular computation in mammalian cells, with applications in cell and tissue engineering.
Their cells execute these instructions by using proteins called DNA recombinases, which cut, reshuffle, or fuse segments of DNA. These proteins recognize and target specific positions on a DNA strand—and the researchers figured out how to trigger their activity. Depending on whether the recombinase gets triggered, the cell may or may not produce the protein encoded in the DNA segment.

A cell could be programmed, for example, with a so-called NOT logic gate. This is one of the simplest logic instructions: Do NOT do something whenever you receive the trigger. This study’s authors used this function to create cells that light up on command. Biologist Wilson Wong of Boston University, who led the research, refers to these engineered cells as “genetic circuits.”

Here’s how it worked: When the cell contained a specific DNA recombinase, it would NOT produce the blue fluorescent protein that makes it light up. But when the cell did not contain the recombinase, its instruction was DO light up. The cell could also follow much more complicated instructions, like lighting up under longer sets of conditions.
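The recombinase logic maps cleanly onto ordinary Boolean functions. Here is a minimal Python sketch of the NOT-gate cell described above; the function names are illustrative, not from the paper, and the code models only the logic, not the biology:

```python
def not_gate(recombinase_present: bool) -> bool:
    """A cell wired as a NOT gate fluoresces only when the
    recombinase trigger is absent."""
    return not recombinase_present

# Truth table for the engineered cell:
for trigger in (False, True):
    glow = not_gate(trigger)
    print(f"recombinase present: {trigger!s:5}  ->  fluoresces: {glow}")
```

More complicated instructions, such as the paper's multi-input circuits, compose the same way ordinary logic gates do: each additional recombinase input adds another condition to the Boolean expression the cell evaluates.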

Wong says that you could use these lit up cells to diagnose diseases, by triggering them with proteins associated with a particular disease. If the cells light up after you mix them with a patient’s blood sample, that means the patient has the disease. This would be much cheaper than current methods that require expensive machinery to analyze the blood sample.

Now, don’t get distracted by the shiny lights quite yet. The real point here is that the cells understand and execute directions correctly. “It’s like prototyping electronics,” says biologist Kate Adamala of the University of Minnesota, who wasn’t involved in the research. As every Maker knows, the first step to building complex Arduino circuits is teaching an LED to blink on command.

Pharmaceutical companies are teaching immune cells to be better cancer scouts using similar technology. Cancer cells have biological fingerprints, such as a specific type of protein. Juno Therapeutics, a Seattle-based company, engineers immune cells that can detect these proteins and target cancer cells specifically. If you put logic gates in those immune cells, you could program the immune cells to destroy the cancer cells in a more sophisticated and controlled way.

Programmable cells have other potential applications. Many companies use genetically modified yeast cells to produce useful chemicals. Ginkgo Bioworks, a Boston-based company, uses these yeast cells to produce fragrances, which they have sold to perfume companies. This yeast eats sugar just like brewer’s yeast, but instead of producing alcohol, it burps aromatic molecules. The yeast isn’t perfect yet: Cells tend to mutate as they divide, and after many divisions, they stop working well. Narendra Maheshri, a scientist at Ginkgo, says that you could program the yeast to self-destruct when it stops functioning properly, before they spoil a batch of high-grade cologne.

Wong’s group wasn’t the first to make biological logic gates, but they’re the first to build so many with consistent success. Of the 113 circuits they built, 109 worked. “In my personal experience building genetic circuits, you’d be lucky if they worked 25 percent of the time,” Wong says. Now that they’ve gotten these basic genetic circuits to work, the next step is to make the logic gates work in different types of cells.

But it won’t be easy. Cells are incredibly complicated—and DNA doesn’t have straightforward “on” and “off” switches like an electronic circuit. In Wong’s engineered cells, you “turn off” the production of a certain protein by altering the segment of DNA that encodes its instructions. It doesn’t always work, because nature might have encoded some instructions in duplicate. In other words: It’s hard to debug 3 billion years of evolution.


lunes, 12 de junio de 2017

Researchers take major step forward in Artificial Intelligence

The long-standing dream of using Artificial Intelligence (AI) to build an artificial brain has taken a significant step forward, as a team led by Professor Newton Howard from the University of Oxford has successfully prototyped a nanoscale, AI-powered artificial brain in the form factor of a high-bandwidth neural implant.

Professor Newton Howard (pictured above and below) holding parts of the implant device
In collaboration with INTENT LTD, Qualcomm Corporation, Intel Corporation, Georgetown University and the Brain Sciences Foundation, Professor Howard’s Oxford Computational Neuroscience Lab in the Nuffield Department of Surgical Sciences has developed the proprietary algorithms and the optoelectronics required for the device. Testing in rodents is on track to begin soon.

This achievement caps over a decade of research by Professor Howard at MIT’s Synthetic Intelligence Lab and the University of Oxford, work that resulted in several issued US patents on the technologies and algorithms that power the device:
  • the Fundamental Code Unit of the Brain (FCU),
  • the Brain Code (BC), and
  • the Biological Co-Processor (BCP).
Together, these form the latest foundations for any eventual merger between biological and machine intelligence. Ni2o (pronounced “Nitoo”) is the entity that Professor Howard licensed to further develop, market and promote these technologies.
The Biological Co-Processor is unique in that it uses advanced nanotechnology, optogenetics and deep machine learning to intelligently map internal events, such as neural spiking activity, to external physiological, linguistic and behavioral expression. The implant contains over a million carbon nanotubes, each of which is 10,000 times smaller than the width of a human hair. Carbon nanotubes provide a natural, high-bandwidth interface, as they conduct heat, light and electricity, instantaneously updating the neural lace. They adhere to neuronal constructs and even promote neural growth. Qualcomm team leader Rudy Beraha commented, 'Although the prototype unit shown today is tethered to external power, a commercial Brain Co-Processor unit will be wireless and inductively powered, enabling it to be administered via a minimally invasive procedure.'

The device uses a combination of methods to write to the brain, including 
  • pulsed electricity,
  • light, and
  • various molecules that stimulate or inhibit the activation of specific neuronal groups.
These can be targeted to stimulate a desired response, such as releasing chemicals in patients suffering from a neurological disorder or imbalance. The BCP is designed as a fully integrated system to use the brain’s own internal systems and chemistries to pattern and mimic healthy brain behavior, an approach that stands in stark contrast to the current state of the art, which is to simply apply mild electrocution to problematic regions of the brain. 

Therapeutic uses
The Biological Co-Processor promises to provide relief for millions of patients suffering from neurological, psychiatric and psychological disorders as well as degenerative diseases. Initial therapeutic uses will likely be for patients with traumatic brain injuries and neurodegenerative disorders, such as Alzheimer’s, as the BCP will strengthen the weakened, shortened connections responsible for lost memories and skills. Once implanted, the device provides a closed-loop, self-learning platform able to both determine and administer the perfect balance of pharmaceutical, electroceutical, genomeceutical and optoceutical therapies.

Dr Richard Wirt, a Senior Fellow at Intel Corporation and Co-Founder of INTENT, Ni2o’s partner in bringing the BCP to market, commented on the device, saying, 'In the immediate timeframe, this device will have many benefits for researchers, as it could be used to replicate an entire brain image, synchronously mapping internal and external expressions of human response. Over the long term, the potential therapeutic benefits are unlimited.'
Rather than simply disrupting neural circuits, the machine learning systems within the BCP are designed to interpret these signals and intelligently read and write to the surrounding neurons. These capabilities could be used to reestablish any degenerative or trauma-induced damage and perhaps write these memories and skills to other, healthier areas of the brain. 

One day, these capabilities could also be used in healthy patients to radically augment human ability and proactively improve health. As Professor Howard points out: 'The brain controls all organs and systems in the body, so the cure to nearly every disease resides there.' Speaking more broadly, Professor Howard sees the merging of man with machine as our inevitable destiny, claiming it to be 'the next step on the blueprint that the author of it all built into our natural architecture.'

With the resurgence of neuroscience and of AI-driven machine learning, there has been renewed interest in brain implants. This past March, Elon Musk and Bryan Johnson independently announced that they are focusing on and investing in the brain-computer interface domain.

When asked about these new competitors, Professor Howard said he is happy to see all these new startups and established names getting into the field; he only wonders what took them so long, stating: 'I would like to see us all working together, as we have already established a mathematical foundation and software framework to solve so many of the challenges they will be facing. We could all get there faster if we could work together - after all, the patient is the priority.'

© 2017 Nuffield Department of Surgical Sciences, John Radcliffe Hospital, Headington, Oxford, OX3 9DU

2 June 2017 

domingo, 11 de junio de 2017

The Big, Hot, Expensive Problem Facing Cities Now

Cities will lose billions, and the planet will suffer–but designers could help.
[Photo: Max Ostrozhinskiy/Unsplash]
Certain climate change scenarios lend themselves to the imagination. Our brains can easily understand the risks; they’re almost filmic. Storms intensify. Cities heat up. Drought and disease explode. Coastlines are abandoned. Comparatively, financial losses can seem like an afterthought. But as economists piece together a more complex understanding of how climate change will impact the world, they’re raising the alarm.

The latest warning comes from economists from Mexico, the U.K., and the Netherlands, who show that most estimates of the cost of climate change are missing something important: the fact that global warming will be much worse in cities thanks to the urban heat island effect. Not only will cities be much hotter, they’ll pay for it, losing as much as 11% of their GDP in the most extreme cases. And overall, this “local” warming will make global warming worse. Cities need to act now to increase cool roofs, cool asphalt, and other design changes that can dampen the effect, they argue.

In the 1800s, a British scientist named Luke Howard observed that the temperature in London was consistently higher than nearby areas. Today that phenomenon is called the urban heat island effect: Asphalt, dense architecture, energy usage, and a lack of green space all conspire to make cities much warmer than areas nearby–which actually cascades to dramatically alter the weather patterns around cities in general. The effect also compounds climate change in cities, which see hotter temperatures than what the rest of the world experiences.

[Photo: Vladimir Kudinov/Unsplash]
In the journal Nature Climate Change, the economists Francisco Estrada, W.J. Wouter Botzen, and Richard S.J. Tol write that this “local” form of climate change will deeply depress the urban economy–and dramatically “amplify” global climate change overall. “Any hard-won victories over climate change on a global scale could be wiped out by the effects of uncontrolled urban heat islands,” Tol said in a University of Sussex statement. The impact is so dramatic that the economic losses from climate change are almost three times worse when the urban heat island effect is included in the model, as opposed to conventional models that don’t consider the effect.

The trio ran an analysis of the 1,692 largest cities in the world under several different future greenhouse gas concentration models, ultimately finding that the hardest-hit cities could lose almost 11% of their GDP by 2100 under the most extreme scenario, with average losses at about 5.6%. For a city like New York, which had a GDP of $1.33 trillion in 2012, an 11% loss could mean roughly $146 billion. For comparison’s sake, that’s almost double the city budget Mayor de Blasio proposed this year, or roughly what China spends on defense every year. The urban heat island effect would make any attempts to mitigate climate change on a global scale (say, through international treaties or large-scale efforts) way less effective. In short, if cities don’t start mitigating the urban heat island effect, they’ll be in big trouble economically very soon, and the rest of the world will suffer, too.
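The New York figure is easy to check. A few lines of Python reproduce the arithmetic; the GDP and loss percentages come from the article, while the function name is purely illustrative:

```python
def projected_loss(gdp_usd: float, loss_fraction: float) -> float:
    """Projected annual economic loss from urban-heat-island-amplified
    warming, as a fraction of city GDP (figures from the article)."""
    return gdp_usd * loss_fraction

nyc_gdp_2012 = 1.33e12  # New York City GDP in 2012, in USD

worst_case = projected_loss(nyc_gdp_2012, 0.11)   # hardest-hit scenario, ~11%
average    = projected_loss(nyc_gdp_2012, 0.056)  # average scenario, ~5.6%

print(f"worst case: ${worst_case / 1e9:.0f} billion")  # ~$146 billion
print(f"average:    ${average / 1e9:.0f} billion")     # ~$74 billion
```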

While that’s bad news for just about everyone involved, the economists point out a silver lining: Cities are nimbler and more flexible in enacting policy than hulking national or international governments. They modeled four different levels of policy that cities could make, and found that mitigating the urban heat island effect on a local level could have major benefits on a global scale. “And even when global efforts fail, we show that local policies can still have a positive impact, making them at least a useful insurance for bad climate outcomes on the international stage,” Tol added.

[Photo: Maxvis/iStock]
That includes green roofs and cool roofs, which reflect solar radiation with reflective paint or material, as well as cool pavements, which are made with reflective aggregate to bounce back the sun’s rays. (Expanding green spaces and increasing tree plantings are important, too, they add.)

Some cities are already enacting policy in line with their recommendations: Los Angeles made cool roofs a requirement in 2013, and just last month New York City released guidelines for resilient architecture that include cool roofs and cool pavement, as well as other heat-mitigation designs like bioswales ("...landscape elements designed to concentrate or remove silt and pollution from surface runoff water. They consist of a swaled drainage course with gently sloped sides (less than 6%) and filled with vegetation, compost and/or riprap..."). Meanwhile, many other cities are replacing parking lots with green space and parks. Architects in Phoenix are incorporating heat island-busting canopies into their designs.
Photo: Co.Design
It’s further proof that the battle for the planet will be fought in cities–and that architecture, infrastructure, and urban design will be important weapons against it. 

Kelsey Campbell-Dollaghan is Co.Design's deputy editor

ORIGINAL: FastCoDesign

sábado, 3 de junio de 2017

We Could Build an Artificial Brain Right Now

Large-scale brainlike systems are possible with existing technology—if we’re willing to spend the money

Photo: Dan Saelinger

Brain-inspired computing is having a moment. Artificial neural network algorithms like deep learning, which are very loosely based on the way the human brain operates, now allow digital computers to perform such extraordinary feats as translating language, hunting for subtle patterns in huge amounts of data, and beating the best human players at Go.

But even as engineers continue to push this mighty computing strategy, the energy efficiency of digital computing is fast approaching its limits. Our data centers and supercomputers already draw megawatts—some 2 percent of the electricity consumed in the United States goes to data centers alone. The human brain, by contrast, runs quite well on about 20 watts, which represents the power produced by just a fraction of the food a person eats each day. If we want to keep improving computing, we will need our computers to become more like our brains.

Hence the recent focus on neuromorphic technology, which promises to move computing beyond simple neural networks and toward circuits that operate more like the brain’s neurons and synapses do. The development of such physical brainlike circuitry is actually pretty far along. Work at my lab and others around the world over the past 35 years has led to artificial neural components like synapses and dendrites that respond to and produce electrical signals much like the real thing.

So, what would it take to integrate these building blocks into a brain-scale computer? 
In 2013, Bo Marr, a former graduate student of mine at Georgia Tech, and I looked at the best engineering and neuroscience knowledge of the time and concluded that it should be possible to build a silicon version of the human cerebral cortex with the transistor technology then in production. What’s more, the resulting machine would take up less than a cubic meter of space and consume less than 100 watts, not too far from the human brain.

That is not to say creating such a computer would be easy. The system we envisioned would still require a few billion dollars to design and build, including some significant packaging innovations to make it compact. There is also the question of how we would program and train the computer. Neuromorphic researchers are still struggling to understand how to make thousands of artificial neurons work together and how to translate brainlike activity into useful engineering applications.

Still, the fact that we can envision such a system means that we may not be far off from smaller-scale chips that could be used in portable and wearable electronics. These gadgets demand low power consumption, and so a highly energy-efficient neuromorphic chip—even if it takes on only a subset of computational tasks, such as signal processing—could be revolutionary. Existing capabilities, like speech recognition, could be extended to handle noisy environments. We could even imagine future smartphones conducting real-time language translation between you and the person you’re talking to. Think of it this way: In the 40 years since the first signal-processing integrated circuits, Moore’s Law has improved energy efficiency by roughly a factor of 1,000. The most brainlike neuromorphic chips could dwarf such improvements, potentially driving down power consumption by another factor of 100 million. That would bring computations that would otherwise need a data center to the palm of your hand.

The ultimate brainlike machine will be one in which we build analogues for all the essential functional components of the brain:
  • the synapses, which connect neurons and allow them to receive and respond to signals; 
  • the dendrites, which combine and perform local computations on those incoming signals; and 
  • the core, or soma, region of each neuron, which integrates inputs from the dendrites and transmits its output on the axon.
Simple versions of all these basic components have already been implemented in silicon. The starting point for such work is the same metal-oxide-semiconductor field-effect transistor, or MOSFET, that is used by the billions to build the logic circuitry in modern digital processors.

These devices have a lot in common with neurons. Neurons operate using voltage-controlled barriers, and their electrical and chemical activity depends primarily on channels in which ions move between the interior and exterior of the cell—a smooth, analog process that involves a steady buildup or decline instead of a simple on-off operation.

MOSFETs are also voltage controlled and operate by the movement of individual units of charge. And when MOSFETs are operated in the “subthreshold” mode, below the voltage threshold used to digitally switch between on and off, the amount of current flowing through the device is very small—less than a thousandth of what is seen in the typical switching of digital logic gates.
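That exponential current-voltage relationship is what makes the subthreshold MOSFET such a natural analogue of a neuron's ion channels. It can be sketched with the standard weak-inversion model; the prefactor, slope factor, and thermal voltage below are illustrative, not tied to any particular fabrication process:

```python
import math

def subthreshold_current(v_gs, i0=1e-12, n=1.5, u_t=0.026):
    """Drain current of a MOSFET in weak inversion (subthreshold):
    I_D = I0 * exp(V_GS / (n * U_T)).
    i0: leakage-scale prefactor in amps; n: subthreshold slope factor;
    u_t: thermal voltage kT/q at room temperature (~26 mV).
    All values are illustrative."""
    return i0 * math.exp(v_gs / (n * u_t))

# With these values, current grows roughly 10x for every ~90 mV of
# gate voltage (n * U_T * ln(10) = 1.5 * 26 mV * 2.30 = ~90 mV/decade):
for v in (0.0, 0.09, 0.18):
    print(f"V_GS = {v:.2f} V  ->  I_D = {subthreshold_current(v):.2e} A")
```

The smooth, analog buildup of current with voltage, rather than a hard on/off switch, is the property the article is pointing to.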

The notion that subthreshold transistor physics could be used to build brainlike circuitry originated with Carver Mead of Caltech, who helped revolutionize the field of very-large-scale circuit design in the 1970s. Mead pointed out that chip designers fail to take advantage of a lot of interesting behavior—and thus information—when they use transistors only for digital logic. The process, he wrote in 1990 [PDF], essentially involves “taking all the beautiful physics that is built into...transistors, mashing it down to a 1 or 0, and then painfully building it back up with AND and OR gates to reinvent the multiply.” A more “physical” or “physics-based” computer could execute more computations per unit energy than its digital counterpart. Mead predicted such a computer would take up significantly less space as well.

In the intervening years, neuromorphic engineers have made all the basic building blocks of the brain out of silicon with a great deal of biological fidelity. The neuron’s dendrite, axon, and soma components can all be fabricated from standard transistors and other circuit elements. In 2005, for example, Ethan Farquhar, then a Ph.D. candidate, and I created a neuron circuit using a set of six MOSFETs and a handful of capacitors. Our model generated electrical pulses that very closely matched those in the soma part of a squid neuron, a long-standing experimental subject. What’s more, our circuit accomplished this feat with similar current levels and energy consumption to those in the squid’s brain. If we had instead used analog circuits to model the equations neuroscientists have developed to describe that behavior, we’d need on the order of 10 times as many transistors. Performing those calculations with a digital computer would require even more space.
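The six-MOSFET circuit itself isn't given in the article, but the behavior it captures (input current charging a soma until it fires a pulse, then resetting) can be sketched with a textbook leaky integrate-and-fire model. All constants below are illustrative and are not fitted to the Farquhar circuit or to any real neuron:

```python
def simulate_lif(input_current, dt=1e-4, tau=0.02, r=1e7,
                 v_rest=0.0, v_thresh=0.05, v_reset=0.0):
    """Leaky integrate-and-fire neuron: the membrane voltage leaks
    toward rest while integrating input current; crossing threshold
    emits a spike and resets. Constants are illustrative only."""
    v = v_rest
    spikes, trace = [], []
    for step, i_in in enumerate(input_current):
        v += (-(v - v_rest) + r * i_in) * dt / tau  # leak + integrate
        if v >= v_thresh:                           # fire and reset
            spikes.append(step * dt)
            v = v_reset
        trace.append(v)
    return spikes, trace

# A constant 10 nA input for 100 ms produces a regular spike train:
spikes, trace = simulate_lif([10e-9] * 1000)
print(f"{len(spikes)} spikes in 100 ms")
```

A silicon soma does this integration with transistor currents directly, which is why it can match biological energy budgets that a digital simulation of the same equations cannot.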
Illustration: James Provost Synapses and Soma: The floating-gate transistor [top left], which can store differing amounts of charge, can be used to build a “crossbar” array of artificial synapses [bottom left]. Electronic versions of other neuron components, such as the soma region [right], can be made from standard transistors and other circuit components.
Emulating synapses is a little trickier. A device that behaves like a synapse must have the ability to remember what state it is in, respond in a particular way to an incoming signal, and adapt its response over time.

There are a number of potential approaches to building synapses. The most mature one by far is the single-transistor learning synapse (STLS), a device that my colleagues and I at Caltech worked on in the 1990s while I was a graduate student studying under Mead.

We first presented the STLS in 1994, and it became an important tool for engineers who were building modern analog circuitry, such as physical neural networks. In neural networks, each node in the network has a weight associated with it, and those weights determine how data from different nodes are combined. The STLS was the first device that could hold a variety of different weights and be reprogrammed on the fly. The device is also nonvolatile, which means that it remembers its state even when not in use—a capability that significantly reduces how much energy it needs.

The STLS is a type of floating-gate transistor, a device that is used to build memory cells in flash memory. In an ordinary MOSFET, a gate controls the flow of electricity through a current-carrying channel. A floating-gate transistor has a second gate that sits between this electrical gate and the channel. This floating gate is not directly connected to ground or any other component. Thanks to that electrical isolation, which is enhanced by high-quality silicon-insulator interfaces, charges remain in the floating gate for a long time. The floating gate can take on many different amounts of charge and so have many different levels of electrical response, an essential requisite for creating an artificial synapse capable of varying its response to stimuli.

My colleagues and I used the STLS to demonstrate the first crossbar network, a computational model currently popular with nanodevice researchers. In this two-dimensional array, devices sit at the intersection of input lines running north-south and output lines running east-west. This configuration is useful because it lets you program the connection strength of each “synapse” individually, without disturbing the other elements in the array.
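Functionally, programming a synaptic weight at each crossing and summing the contributions along every output line is a matrix-vector multiply. A plain-Python sketch, with made-up weights and inputs, of what the crossbar computes:

```python
def crossbar_output(weights, inputs):
    """Each output line collects the weighted contributions of every
    input line -- a matrix-vector multiply. weights[i][j] is the
    programmed strength of the synapse where input line i crosses
    output line j."""
    n_out = len(weights[0])
    return [sum(weights[i][j] * inputs[i] for i in range(len(inputs)))
            for j in range(n_out)]

# Illustrative 3-input, 2-output array:
w = [[0.2, 0.5],
     [0.8, 0.1],
     [0.4, 0.9]]
x = [1.0, 0.5, 0.25]

print(crossbar_output(w, x))  # two weighted column sums, one per output line
```

In the physical array, the multiply happens in the device physics (each synapse passes a current proportional to weight times input) and the sum happens on the wire, which is where the energy savings over a digital implementation come from.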

Thanks in part to a recent Defense Advanced Research Projects Agency program called SyNAPSE, the neuromorphic engineering field has seen a surge of research into artificial synapses built from nanodevices such as

  • memristors
  • resistive RAM, and 
  • phase-change memories (as well as floating-gate devices). 
But it will be hard for these new artificial synapses to improve on our two-decade-old floating-gate arrays. Memristors and other novel memories come with programming challenges; some have device architectures that make it difficult to target a single specific device in a crossbar array. Others need a dedicated transistor in order to be programmed, adding significantly to their footprint. Because floating-gate memory is programmable over a wide range of values, it can be more easily fine-tuned to compensate for manufacturing variation from device to device than can many nanodevices. A number of neuromorphic research groups that tried integrating nanodevices into their designs have recently come around to using floating-gate devices.

So how will we put all these brainlike components together? 
In the human brain, of course, neurons and synapses are intermingled. Neuromorphic chip designers must take a more integrated approach as well, with all neural components on the same chip, tightly mixed together. This is not the case in many neuromorphic labs today: To make research projects more manageable, different building blocks may be placed in different areas. Synapses, for example, may be relegated to an off-chip array. Connections may be routed through another chip called a field-programmable gate array, or FPGA.

But as we scale up neuromorphic systems, we’ll need to take care that we don’t replicate the arrangement in today’s computers, which lose a significant amount of energy driving bits back and forth between logic, memory, and storage. Today, a computer can easily consume 10 times the energy to move the data needed for a multiple-accumulate operation—a common signal-processing computation—as it does to perform the calculation.

The brain, by contrast, minimizes the energy cost of communication by keeping operations highly local. The memory elements of the brain, such as synaptic strengths, are mixed in with the neural components that integrate signals. And the brain’s “wires”—the dendrites and axons that extend from neurons to transmit, respectively, incoming signals and outgoing pulses—are generally fairly short relative to the size of the brain, so they don’t require large amounts of energy to sustain a signal. From anatomical data, we know that more than 90 percent of neurons connect with only their nearest 1,000 or so neighbors.

Another big question for the builders of brainlike chips and computers is the algorithms we will run on them. Even a loosely brain-inspired system can have a big advantage over digital systems. In 2004, for example, my group used floating-gate devices to perform multiplications for signal processing with just 1/1,000 the energy and 1/100 the area of a comparable digital system. In the years since, other researchers and my group have successfully demonstrated neuromorphic approaches to many other kinds of signal-processing calculations.

But the brain is still 100,000 times as efficient as the systems in these demonstrations. That’s because while our current neuromorphic technology takes advantage of the neuronlike physics of transistors, it doesn’t consider the algorithms the brain uses to perform its operations.

Today, we are just beginning to discover these physical algorithms—that is, the processes that will allow brainlike chips to operate with more brainlike efficiency. Four years ago, my research group used silicon somas, synapses, and dendrites to perform a word-spotting algorithm that identifies words in a speech waveform. This physical algorithm exhibited a thousandfold improvement in energy efficiency over predicted analog signal processing. Eventually, by lowering the amount of voltage supplied to the chips and using smaller transistors, researchers should be able to build chips that parallel the brain in efficiency for a range of computations.

When I started in neuromorphic research 30 years ago, everyone believed tremendous opportunities would arise from designing systems that are more like the brain. And indeed, entire industries are now being built around brain-inspired AI and deep learning, with applications that promise to transform—among other things—our mobile devices, our financial institutions, and how we interact in public spaces.

And yet, these applications rely only slightly on what we know about how the brain actually works. The next 30 years will undoubtedly see the incorporation of more such knowledge. We already have much of the basic hardware we need to accomplish this neuroscience-to-computing translation. But we must develop a better understanding of how that hardware should behave—and what computational schemes will yield the greatest real-world benefits.

Consider this a call to action. We have come pretty far with a very loose model of how the brain works. But neuroscience could lead to far more sophisticated brainlike computers. And what greater feat could there be than using our own brains to learn how to build new ones?

This article appears in the June 2017 print issue as “A Road Map for the Artificial Brain.”

About the Author

Jennifer Hasler is a professor of electrical and computer engineering at the Georgia Institute of Technology.

Posted 1 Jun 2017 | 19:00 GMT