Monthly Archives: October 2016

Shedding light on the purpose of inhibitory neurons

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory have developed a new computational model of a neural circuit in the brain, which could shed light on the biological role of inhibitory neurons — neurons that keep other neurons from firing.

The model describes a neural circuit consisting of an array of input neurons and an equivalent number of output neurons. The circuit performs what neuroscientists call a “winner-take-all” operation, in which signals from multiple input neurons induce a signal in just one output neuron.
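As a rough illustration of the operation (a toy rate model, not the authors’ formal circuit), the sketch below drives several output units with fixed inputs while a single global inhibitory unit suppresses activity in proportion to the total output; after a few rounds, only the most strongly driven output remains active.

    import numpy as np

    def winner_take_all(inputs, steps=10, inhibition=0.5):
        """Toy winner-take-all dynamics with one global inhibitory unit."""
        inputs = np.asarray(inputs, dtype=float)
        activity = inputs.copy()
        for _ in range(steps):
            feedback = inhibition * activity.sum()   # the inhibitory neuron's signal
            activity = np.maximum(activity + inputs - feedback, 0.0)
        return activity

    print(winner_take_all([0.2, 0.9, 0.5]))   # only the strongest input stays nonzero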

Using the tools of theoretical computer science, the researchers prove that, within the context of their model, a certain configuration of inhibitory neurons provides the most efficient means of enacting a winner-take-all operation. Because the model makes empirical predictions about the behavior of inhibitory neurons in the brain, it offers a good example of the way in which computational analysis could aid neuroscience.

The researchers will present their results this week at the conference on Innovations in Theoretical Computer Science. Nancy Lynch, the NEC Professor of Software Science and Engineering at MIT, is the senior author on the paper. She’s joined by Merav Parter, a postdoc in her group, and Cameron Musco, an MIT graduate student in electrical engineering and computer science.

For years, Lynch’s group has studied communication and resource allocation in ad hoc networks — networks whose members are continually leaving and rejoining. But recently, the team has begun using the tools of network analysis to investigate biological phenomena.

“There’s a close correspondence between the behavior of networks of computers or other devices like mobile phones and that of biological systems,” Lynch says. “We’re trying to find problems that can benefit from this distributed-computing perspective, focusing on algorithms for which we can prove mathematical properties.”

Artificial neurology

In recent years, artificial neural networks — computer models roughly based on the structure of the brain — have been responsible for some of the most rapid improvement in artificial-intelligence systems, from speech transcription to face recognition software.

An artificial neural network consists of “nodes” that, like individual neurons, have limited information-processing power but are densely interconnected. Data are fed into the first layer of nodes. If the data received by a given node meet some threshold criterion — for instance, if they exceed a particular value — the node “fires,” or sends signals along all of its outgoing connections.

Each of those outgoing connections, however, has an associated “weight,” which can augment or diminish a signal. Each node in the next layer of the network receives weighted signals from multiple nodes in the first layer; it adds them together, and again, if their sum exceeds some threshold, it fires. Its outgoing signals pass to the next layer, and so on.
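In code, each layer reduces to a weighted sum followed by a threshold test. The sketch below (with made-up weights and thresholds) passes an input through two such layers:

    import numpy as np

    def layer(inputs, weights, threshold=1.0):
        """One network layer: weighted sum per node, then fire (1) or stay silent (0)."""
        sums = weights @ inputs                # each row of weights feeds one node
        return (sums > threshold).astype(float)

    x = np.array([0.6, 0.9, 0.1])              # data fed into the first layer
    w1 = np.array([[0.8, 0.7, 0.0],            # weights into two second-layer nodes
                   [0.2, 0.1, 0.9]])
    hidden = layer(x, w1)                      # -> [1., 0.]: only the first node fires
    w2 = np.array([[1.2, 0.4]])                # weights into a single output node
    print(layer(hidden, w2))                   # -> [1.]: the output node fires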

Easy querying and filtering

The age of big data has seen a host of new techniques for analyzing large data sets. But before any of those techniques can be applied, the target data has to be aggregated, organized, and cleaned up.

That turns out to be a shockingly time-consuming task. In a 2016 survey, 80 data scientists told the company CrowdFlower that, on average, they spent 80 percent of their time collecting and organizing data and only 20 percent analyzing it.

An international team of computer scientists hopes to change that, with a new system called Data Civilizer, which automatically finds connections among many different data tables and allows users to perform database-style queries across all of them. The results of the queries can then be saved as new, orderly data sets that may draw information from dozens or even thousands of different tables.

“Modern organizations have many thousands of data sets spread across files, spreadsheets, databases, data lakes, and other software systems,” says Sam Madden, an MIT professor of electrical engineering and computer science and faculty director of MIT’s bigdata@CSAIL initiative. “Civilizer helps analysts in these organizations quickly find data sets that contain information that is relevant to them and, more importantly, combine related data sets together to create new, unified data sets that consolidate data of interest for some analysis.”

The researchers presented their system last week at the Conference on Innovative Data Systems Research. The lead authors on the paper are Dong Deng and Raul Castro Fernandez, both postdocs at MIT’s Computer Science and Artificial Intelligence Laboratory; Madden is one of the senior authors. They’re joined by six other researchers from Technical University of Berlin, Nanyang Technological University, the University of Waterloo, and the Qatar Computing Research Institute. Although he’s not a co-author, MIT adjunct professor of electrical engineering and computer science Michael Stonebraker, who in 2014 won the Turing Award — the highest honor in computer science — contributed to the work as well.

Pairs and permutations

Data Civilizer assumes that the data it’s consolidating is arranged in tables. As Madden explains, in the database community, there’s a sizable literature on automatically converting data to tabular form, so that wasn’t the focus of the new research. Similarly, while the prototype of the system can extract tabular data from several different types of files, getting it to work with every conceivable spreadsheet or database program was not the researchers’ immediate priority. “That part is engineering,” Madden says.

The system begins by analyzing every column of every table at its disposal. First, it produces a statistical summary of the data in each column. For numerical data, that might include a distribution of the frequency with which different values occur; the range of values; and the “cardinality” of the values, or the number of different values the column contains. For textual data, a summary would include a list of the most frequently occurring words in the column and the number of different words. Data Civilizer also keeps a master index of every word occurring in every table and the tables that contain it.
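A stripped-down version of that profiling step might look like the sketch below. The function and field names are illustrative, not Data Civilizer’s actual interface, but the summaries (range, cardinality, frequent words, and the word-to-table index) follow the description above.

    from collections import Counter, defaultdict

    def profile_column(values):
        """Summarize one column: range and cardinality for numbers,
        frequent words and distinct-word count for text."""
        if all(isinstance(v, (int, float)) for v in values):
            return {"min": min(values), "max": max(values),
                    "cardinality": len(set(values)),
                    "frequencies": Counter(values)}
        words = [w for v in values for w in str(v).split()]
        return {"top_words": Counter(words).most_common(5),
                "distinct_words": len(set(words))}

    def build_word_index(tables):
        """Master index mapping every word to the set of tables containing it."""
        index = defaultdict(set)
        for table_name, columns in tables.items():
            for values in columns.values():
                for value in values:
                    for word in str(value).split():
                        index[word].add(table_name)
        return index

    tables = {"employees": {"name": ["Ada Lovelace", "Alan Turing"], "age": [36, 41]}}
    print(profile_column(tables["employees"]["age"]))   # numeric summary
    print(build_word_index(tables)["Turing"])           # -> {'employees'}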

The Macro Recorder in VBA Programming

The macro recorder is a good introduction to the world of VBA programming, but it’s not meant to be your only teacher. It provides a simplistic approach to coding with Excel’s object model, but it is by no means a teacher of advanced or efficient programming methods. You can even pick up some bad habits if you rely on it as your only means of learning VBA. Like many other programmers, I started off with the recorder but eventually moved to the next level.

Here are 10 things I had to learn to take my programming skill up a notch.


1. The Macro Recorder Is a Terrible Teacher, But You Can Learn from It.

I’m not saying to throw out the recorder and never use it again. In truth, most of the time I find it more useful than Microsoft’s help files when I need to look up an object or its properties and methods. Need the code for creating a pivot table? Go ahead and record it so you can see the objects and steps involved. But then improve the code by applying the advice below.


2. Declare Your Variables!

In the early days, when RAM was expensive, every byte counted. That was a major argument for declaring variables: Undeclared variables are of type Variant, with a minimum size of 16 bytes, whereas a variable declared as type Integer uses only 2 bytes.

Now that plentiful RAM is the norm, some have thrown out that argument and don’t bother declaring variables. But they’ve forgotten the other reason for variable declaration, one that has saved me a lot of frustration: When you require variable declaration, Excel will point out unknown variables during compilation. And if you use mixed upper- and lowercase in your variable names, you can spot typos as you type, because the editor automatically adjusts the case of each name to match its declaration.

You have to manually turn on the variable declaration requirement: In the VBE, go to Tools, Options and check the box for Require Variable Declaration. Once that’s done, any new workbooks will have Option Explicit at the top of every module. For your older workbooks, you can type in Option Explicit at the top of a module, forcing variable declaration.
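A short illustration of the payoff (a made-up example): with Option Explicit in force, the misspelled variable in the commented line below would stop compilation instead of silently becoming a new Variant.

    Option Explicit

    Sub CountSales()
        Dim salesTotal As Integer   ' declared as Integer: 2 bytes, and typos get caught
        Dim i As Integer

        For i = 1 To 10
            salesTotal = salesTotal + i
            ' salesTotl = salesTotal + i   ' would fail to compile: variable not defined
        Next i

        MsgBox "Total: " & salesTotal
    End Sub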


3. There’s No Need to Use Select or Activate.

Probably one of the worst habits the recorder teaches is that objects must be selected before they can be manipulated. If you provide Excel with the specific object you want to manipulate, such as a sheet name or cell address, you don’t need to activate the sheet or select the cell first. So, while for a simple copy-and-paste the macro recorder produces something like this:
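    Sub RecorderVersion()
        ' Representative recorder output for copying A1:A5 on Sheet1 to Sheet2:
        ' every object is selected or activated before it is touched.
        Sheets("Sheet1").Select
        Range("A1:A5").Select
        Selection.Copy
        Sheets("Sheet2").Select
        Range("B1").Select
        ActiveSheet.Paste
    End Sub

you can instead name the objects directly and do the whole operation in one statement, with nothing selected at all:

    Sub DirectVersion()
        ' Reference the ranges explicitly; no Select, no Activate.
        Sheets("Sheet1").Range("A1:A5").Copy Destination:=Sheets("Sheet2").Range("B1")
    End Sub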

Deep learning system could someday serve as a social coach

It’s a fact of nature that a single conversation can be interpreted in very different ways. For people with anxiety or conditions such as Asperger’s, this can make social situations extremely stressful. But what if there was a more objective way to measure and understand our interactions?

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Institute of Medical Engineering and Science (IMES) say that they’ve gotten closer to a potential solution: an artificially intelligent, wearable system that can predict if a conversation is happy, sad, or neutral based on a person’s speech patterns and vitals.

“Imagine if, at the end of a conversation, you could rewind it and see the moments when the people around you felt the most anxious,” says graduate student Tuka Alhanai, who co-authored a related paper with PhD candidate Mohammad Ghassemi that they will present at next week’s Association for the Advancement of Artificial Intelligence (AAAI) conference in San Francisco. “Our work is a step in this direction, suggesting that we may not be that far away from a world where people can have an AI social coach right in their pocket.”

As a participant tells a story, the system can analyze audio, text transcriptions, and physiological signals to determine the overall tone of the story with 83 percent accuracy. Using deep-learning techniques, the system can also provide a “sentiment score” for specific five-second intervals within a conversation.
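The interval scoring can be pictured as a sliding window over the recorded features. The sketch below shows only that bookkeeping; the feature stream and classifier are hypothetical placeholders, standing in for the audio, text, and physiological features and the trained network described here.

    def sentiment_scores(features, samples_per_second=1, window_seconds=5, classify=None):
        """Score each consecutive five-second window of a feature stream.

        `features` holds one feature vector per sample; `classify` is any
        model mapping a window of vectors to a score. Both are placeholders.
        """
        window = window_seconds * samples_per_second
        return [classify(features[i:i + window])
                for i in range(0, len(features) - window + 1, window)]

    # Toy usage: a 10-second, 1 Hz "feature" stream and a stand-in classifier.
    stream = [0.1, 0.4, 0.5, 0.9, 0.8, 0.2, 0.1, 0.3, 0.2, 0.4]
    print(sentiment_scores(stream, classify=lambda w: sum(w) / len(w)))   # two scores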

“As far as we know, this is the first experiment that collects both physical data and speech data in a passive but robust way, even while subjects are having natural, unstructured interactions,” says Ghassemi. “Our results show that it’s possible to classify the emotional tone of conversations in real time.”

The researchers say that the system’s performance would be further improved by having multiple people in a conversation use it on their smartwatches, creating more data to be analyzed by their algorithms. The team is keen to point out that they developed the system with privacy strongly in mind: The algorithm runs locally on a user’s device as a way of protecting personal information. (Alhanai says that a consumer version would obviously need clear protocols for getting consent from the people involved in the conversations.)


How it works

Many emotion-detection studies show participants “happy” and “sad” videos, or ask them to artificially act out specific emotive states. But in an effort to elicit more organic emotions, the team instead asked subjects to tell a happy or sad story of their own choosing.

Subjects wore a Samsung Simband, a research device that captures high-resolution physiological waveforms to measure features such as movement, heart rate, blood pressure, blood flow, and skin temperature. The system also captured audio data and text transcripts to analyze the speaker’s tone, pitch, energy, and vocabulary.

“The team’s usage of consumer market devices for collecting physiological data and speech data shows how close we are to having such tools in everyday devices,” says Björn Schuller, professor and chair of Complex and Intelligent Systems at the University of Passau in Germany, who was not involved in the research. “Technology could soon feel much more emotionally intelligent, or even ‘emotional’ itself.”

Combining art and technology

Garrett Parrish grew up singing and dancing as a theater kid, influenced by his older siblings, one of whom is an actor and the other a stage manager. But by the time he reached high school, Parrish had branched out significantly, drumming in his school’s jazz ensemble and helping to build a state-championship-winning robot.

MIT was the first place Parrish felt he was able to work meaningfully at the nexus of art and technology. “Being a part of the MIT culture, and having the resources that are available here, are really what opened my mind to that intersection,” the MIT senior says. “That’s always been my goal from the beginning: to be as emotionally educated as I am technically educated.”

Parrish, who is majoring in mechanical engineering, has worked on a dizzying array of projects, ranging from app-building to assistant directing to a robotic opera. Driving his work is an interest in shaping technology to serve others.

“The whole goal of my life is to fix all the people problems. I sincerely think that the biggest problems we have are how we deal with each other, and how we treat each other. [We need to be] promoting empathy and understanding, and technology is an enormous power to influence that in a good way,” he says.

Technology for doing good

Parrish began his academic career at Harvard University and transferred to MIT after his first year. Frustrated at how little power individuals often have in society, Parrish joined DoneGood co-founders Scott Jacobsen and Cullen Schwartz, and became the startup’s chief technology officer his sophomore year. “We kind of distilled our frustrations about the way things are into, ‘How do you actionably use people’s existing power to create real change?’” Parrish says.

The DoneGood app and Chrome extension help consumers find businesses that share their priorities and values, such as paying a living wage, or using organic ingredients. The extension monitors a user’s online shopping and recommends alternatives. The mobile app offers a directory of local options and national brands that users can filter according to their values. “The two things that everyday people have at their disposal to create change is how they spend their time and how they spend their money. We direct money away from brands that aren’t sustainable, therefore creating an actionable incentive for them to become more sustainable,” Parrish says.

DoneGood has raised its first round of funding, and became a finalist in the MIT $100K Entrepreneurship Competition last May. The company now has five full-time employees, and Parrish continues to work as CTO part-time. “It’s been a really amazing experience to be in such an important leadership role. And to take something from the ground up, and really figure out what is the best way to actually create the change you want,” Parrish says. “Where technology meets cultural influence is very interesting, and it’s a space that requires a lot of responsibility and perspective.”

Engineering natural and human-made networks

Fadel Adib SM ’13, PhD ’16 has been appointed an assistant professor in the Program in Media Arts and Sciences at the MIT Media Lab, where he leads the new Signal Kinetics research group. His group’s mission is to explore and develop new technologies that can extend human and computer abilities in communication, sensing, and actuation.

Adib comes to the lab from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), where he received his PhD and master’s degrees in electrical engineering and computer science, supervised by MIT professor of electrical engineering and computer science Dina Katabi. Adib’s doctoral thesis, “Wireless Systems that Extend Our Senses,” demonstrates that wireless signals can be used as sensing tools to learn about the environment, thus enabling us to see through walls, track human gestures, and monitor human vital signs from a distance. His master’s thesis, “See Through Walls with Wifi,” won the best master’s thesis award in computer science at MIT in 2013. He earned his bachelor’s degree in computer and communications engineering from the American University of Beirut, in Lebanon, the country of his birth, where he graduated with the highest GPA in the university’s digitally recorded history.

“We can get your locations, we can get your gestures, we can get your breathing,” Adib said at a Media Lab event in October 2016. “And we can even get your heart rate—all without putting any sensor on your body. This is exactly what our research is about.” Signal Kinetics researchers tap into the invisible signals that surround us — from WiFi to brain waves. The aim is to uncover, analyze, and engineer these natural and human-made networks, drawing on tools from computer networks, signal processing, machine learning, and hardware design.

“We are living in a sea of radio waves,” Adib told the Media Lab audience. “As our bodies move, we modulate these radio waves, similar to how you create waves when you move around in a pool of water. While we cannot see these with our naked eye, we can extract them and we can build intelligence in the environment to enable a large number of applications and extend our senses using wireless technology.” The technology is applicable to a broad range of needs: from monitoring an infant’s breathing or an elderly person who has fallen, to determining whether someone has sleep apnea, to detecting survivors in a burning building. The group’s research also has potential applications for gaming and filmmaking.

In 2015, Forbes magazine named Adib one of its 30 Under 30 Who Are Moving the World in Enterprise Technology. In 2014, MIT Technology Review chose him as one of the world’s 35 top innovators under the age of 35. His research has been identified as one of the 50 ways MIT has transformed computer science over the past 50 years.

“Fadel’s work in wireless sensing is groundbreaking and opens up all sorts of new opportunities,” says the Media Lab’s Pattie Maes, the Alex W. Dreyfoos Professor of Media Technology and academic head of the Program in Media Arts and Sciences. “I can’t wait to see what impact his presence in the lab will have on many of the research topics that we focus on, including Smart Cities, Responsive Environments, Extreme Bionics, Extended Intelligence, Tools for Health and Wellbeing, and more.”