The Macro Recorder in VBA Programming

The macro recorder is a good introduction to the world of VBA programming, but it’s not meant to be your only teacher. It provides a simplistic approach to coding with Excel’s object model, but it is far from a teacher of advanced or efficient programming methods. You can even pick up some bad habits if you rely on it as your only means of learning VBA. Like many other programmers, I started off with the recorder but eventually moved on to the next level.

Here are 10 things I had to learn to take my programming skill up a notch.

 

1. The Macro Recorder Is a Terrible Teacher, But You Can Learn from It.

I’m not saying to throw out the recorder and never use it again. In truth, I often find it more useful than Microsoft’s help files when I need to look up an object or its properties and methods. Need the code for creating a pivot table? Then go ahead and record it so you can see the objects and steps involved. But then improve the code by using the advice below.

 

2. Declare Your Variables!

In the early days, when RAM was expensive, every byte counted. That was a major argument for declaring variables: Undeclared variables are of type Variant, with a minimum size of 16 bytes, whereas a variable declared as type Integer uses only 2 bytes.

Now that plentiful RAM is common, some programmers have thrown out that argument and don’t bother declaring variables. But they’ve forgotten the other reason for variable declaration, one that has saved me a lot of frustration: When you require variable declaration, Excel will point out unknown variables during compilation. And if you mix upper- and lowercase in your variable names, you can spot mistakes right away, because the editor corrects the casing to match your declaration as you type your code.

You have to turn on the variable declaration requirement manually: In the VBE, go to Tools, Options and check the box for Require Variable Declaration. Once that’s done, any new workbook will have Option Explicit at the top of every module. For your older workbooks, you can type Option Explicit at the top of each module yourself to force variable declaration.
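To see what this looks like in practice, here is a minimal sketch of a module with declarations in place. The worksheet name and range are illustrative, not taken from any particular workbook:

Option Explicit

Sub TotalSales()
    ' Specific types mean specific sizes: a Long and a Double
    ' instead of two 16-byte Variants.
    Dim salesTotal As Double
    Dim rowIndex As Long

    For rowIndex = 2 To 100
        salesTotal = salesTotal + _
            Worksheets("Data").Cells(rowIndex, 2).Value
    Next rowIndex

    ' Misspell salesTotal anywhere above and this module refuses
    ' to compile, instead of silently creating a new Variant.
    MsgBox "Total sales: " & salesTotal
End Sub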

 

3. There’s No Need to Use Select or Activate.

Probably one of the worst habits the recorder teaches is that objects must be selected before they can be manipulated. If you provide Excel with the specific object you want to manipulate, such as a sheet name or cell address, then you don’t need to activate the sheet or select the cell. So, while the macro recorder provides something like this:
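(The listing that follows is a typical recording for entering a value in cell A1; your exact recorded code will vary.)

Sheets("Sheet1").Select
Range("A1").Select
ActiveCell.FormulaR1C1 = "Hello"

…you can instead write one line that needs no selection at all:

Worksheets("Sheet1").Range("A1").Value = "Hello"

The direct version runs faster, doesn’t depend on which sheet happens to be active, and leaves the user’s current selection untouched.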

Deep learning system could someday serve as a social coach

It’s a fact of nature that a single conversation can be interpreted in very different ways. For people with anxiety or conditions such as Asperger’s, this can make social situations extremely stressful. But what if there were a more objective way to measure and understand our interactions?

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Institute of Medical Engineering and Science (IMES) say that they’ve gotten closer to a potential solution: an artificially intelligent, wearable system that can predict if a conversation is happy, sad, or neutral based on a person’s speech patterns and vitals.

“Imagine if, at the end of a conversation, you could rewind it and see the moments when the people around you felt the most anxious,” says graduate student Tuka Alhanai, who co-authored a related paper with PhD candidate Mohammad Ghassemi that they will present at next week’s Association for the Advancement of Artificial Intelligence (AAAI) conference in San Francisco. “Our work is a step in this direction, suggesting that we may not be that far away from a world where people can have an AI social coach right in their pocket.”

As a participant tells a story, the system can analyze audio, text transcriptions, and physiological signals to determine the overall tone of the story with 83 percent accuracy. Using deep-learning techniques, the system can also provide a “sentiment score” for specific five-second intervals within a conversation.

“As far as we know, this is the first experiment that collects both physical data and speech data in a passive but robust way, even while subjects are having natural, unstructured interactions,” says Ghassemi. “Our results show that it’s possible to classify the emotional tone of conversations in real-time.”

The researchers say that the system’s performance would be further improved by having multiple people in a conversation use it on their smartwatches, creating more data to be analyzed by their algorithms. The team is keen to point out that they developed the system with privacy strongly in mind: The algorithm runs locally on a user’s device as a way of protecting personal information. (Alhanai says that a consumer version would obviously need clear protocols for getting consent from the people involved in the conversations.)

 

How it works

Many emotion-detection studies show participants “happy” and “sad” videos, or ask them to artificially act out specific emotive states. But in an effort to elicit more organic emotions, the team instead asked subjects to tell a happy or sad story of their own choosing.

Subjects wore a Samsung Simband, a research device that captures high-resolution physiological waveforms to measure features such as movement, heart rate, blood pressure, blood flow, and skin temperature. The system also captured audio data and text transcripts to analyze the speaker’s tone, pitch, energy, and vocabulary.

“The team’s usage of consumer market devices for collecting physiological data and speech data shows how close we are to having such tools in everyday devices,” says Björn Schuller, professor and chair of Complex and Intelligent Systems at the University of Passau in Germany, who was not involved in the research. “Technology could soon feel much more emotionally intelligent, or even ‘emotional’ itself.”

Combining art and technology

Garrett Parrish grew up singing and dancing as a theater kid, influenced by his older siblings, one of whom is an actor and the other a stage manager. But by the time he reached high school, Parrish had branched out significantly, drumming in his school’s jazz ensemble and helping to build a state-championship-winning robot.

MIT was the first place Parrish felt he was able to work meaningfully at the nexus of art and technology. “Being a part of the MIT culture, and having the resources that are available here, are really what opened my mind to that intersection,” the MIT senior says. “That’s always been my goal from the beginning: to be as emotionally educated as I am technically educated.”

Parrish, who is majoring in mechanical engineering, has worked on a dizzying array of projects, ranging from app-building to assistant directing to collaborating on a robotic opera. Driving his work is an interest in shaping technology to serve others.

“The whole goal of my life is to fix all the people problems. I sincerely think that the biggest problems we have are how we deal with each other, and how we treat each other. [We need to be] promoting empathy and understanding, and technology is an enormous power to influence that in a good way,” he says.

Technology for doing good

Parrish began his academic career at Harvard University and transferred to MIT after his first year. Frustrated at how little power individuals often have in society, Parrish joined DoneGood co-founders Scott Jacobsen and Cullen Schwartz, and became the startup’s chief technology officer his sophomore year. “We kind of distilled our frustrations about the way things are into, ‘How do you actionably use people’s existing power to create real change?’” Parrish says.

The DoneGood app and Chrome extension help consumers find businesses that share their priorities and values, such as paying a living wage, or using organic ingredients. The extension monitors a user’s online shopping and recommends alternatives. The mobile app offers a directory of local options and national brands that users can filter according to their values. “The two things that everyday people have at their disposal to create change is how they spend their time and how they spend their money. We direct money away from brands that aren’t sustainable, therefore creating an actionable incentive for them to become more sustainable,” Parrish says.

DoneGood has raised its first round of funding, and became a finalist in the MIT $100K Entrepreneurship Competition last May. The company now has five full-time employees, and Parrish continues to work as CTO part-time. “It’s been a really amazing experience to be in such an important leadership role. And to take something from the ground up, and really figure out what is the best way to actually create the change you want,” Parrish says. “Where technology meets cultural influence is very interesting, and it’s a space that requires a lot of responsibility and perspective.”

Engineering natural and human-made networks

Fadel Adib SM ’13, PhD ’16 has been appointed an assistant professor in the Program in Media Arts and Sciences at the MIT Media Lab, where he leads the new Signal Kinetics research group. His group’s mission is to explore and develop new technologies that can extend human and computer abilities in communication, sensing, and actuation.

Adib comes to the lab from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), where he received his PhD and master’s degrees in electrical engineering and computer science, supervised by MIT professor of electrical engineering and computer science Dina Katabi. Adib’s doctoral thesis, “Wireless Systems that Extend Our Senses,” demonstrates that wireless signals can be used as sensing tools to learn about the environment, thus enabling us to see through walls, track human gestures, and monitor human vital signs from a distance. His master’s thesis, “See Through Walls with Wifi,” won the best master’s thesis award in computer science at MIT in 2013. He earned his bachelor’s degree in computer and communications engineering from the American University of Beirut, in Lebanon, the country of his birth, where he graduated with the highest GPA in the university’s digitally recorded history.

“We can get your locations, we can get your gestures, we can get your breathing,” Adib said at a Media Lab event in October 2016. “And we can even get your heart rate—all without putting any sensor on your body. This is exactly what our research is about.” Signal Kinetics researchers tap into the invisible signals that surround us — from WiFi to brain waves. The aim is to uncover, analyze, and engineer these natural and human-made networks, drawing on tools from computer networks, signal processing, machine learning, and hardware design.

“We are living in a sea of radio waves,” Adib told the Media Lab audience. “As our bodies move, we modulate these radio waves, similar to how you create waves when you move around in a pool of water. While we cannot see these with our naked eye, we can extract them and we can build intelligence in the environment to enable a large number of applications and extend our senses using wireless technology.” The technology is applicable to a broad range of needs: from monitoring an infant’s breathing or an elderly person who has fallen, to determining whether someone has sleep apnea, to detecting survivors in a burning building. The group’s research also has potential applications for gaming and filmmaking.

In 2015, Forbes magazine selected Adib among the 30 Under 30 Who Are Moving the World in Enterprise Technology. In 2014, MIT Technology Review chose him as one of the world’s 35 top innovators under the age of 35. His research has been identified as one of the 50 ways MIT has transformed computer science over the past 50 years.

“Fadel’s work in wireless sensing is groundbreaking and opens up all sorts of new opportunities,” says the Media Lab’s Pattie Maes, the Alex W. Dreyfoos Professor of Media Technology and academic head of the Program in Media Arts and Sciences. “I can’t wait to see what impact his presence in the lab will have on many of the research topics that we focus on, including Smart Cities, Responsive Environments, Extreme Bionics, Extended Intelligence, Tools for Health and Wellbeing, and more.”

Shrinking big data while preserving its fundamental mathematical relationships

One way to handle big data is to shrink it. If you can identify a small subset of your data set that preserves its salient mathematical relationships, you may be able to perform useful analyses on it that would be prohibitively time consuming on the full set.

The methods for creating such “coresets” vary according to application, however. Last week, at the Annual Conference on Neural Information Processing Systems, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory and the University of Haifa in Israel presented a new coreset-generation technique that’s tailored to a whole family of data analysis tools with applications in natural-language processing, computer vision, signal processing, recommendation systems, weather prediction, finance, and neuroscience, among many others.

“These are all very general algorithms that are used in so many applications,” says Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT and senior author on the new paper. “They’re fundamental to so many problems. By figuring out the coreset for a huge matrix for one of these tools, you can enable computations that at the moment are simply not possible.”

As an example, in their paper the researchers apply their technique to a matrix — that is, a table — that maps every article on the English version of Wikipedia against every word that appears on the site. That’s 1.4 million articles, or matrix rows, and 4.4 million words, or matrix columns.

That matrix would be much too large to analyze using low-rank approximation, an algorithm that can deduce the topics of free-form texts. But with their coreset, the researchers were able to use low-rank approximation to extract clusters of words that denote the 100 most common topics on Wikipedia. The cluster that contains “dress,” “brides,” “bridesmaids,” and “wedding,” for instance, appears to denote the topic of weddings; the cluster that contains “gun,” “fired,” “jammed,” “pistol,” and “shootings” appears to designate the topic of shootings.

Joining Rus on the paper are Mikhail Volkov, an MIT postdoc in electrical engineering and computer science, and Dan Feldman, director of the University of Haifa’s Robotics and Big Data Lab and a former postdoc in Rus’s group.

The researchers’ new coreset technique is useful for a range of tools with names like singular-value decomposition, principal-component analysis, and latent semantic analysis. But what they all have in common is dimension reduction: They take data sets with large numbers of variables and find approximations of them with far fewer variables.
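As a concrete instance (standard textbook notation, not taken from the paper itself): low-rank approximation via the singular-value decomposition keeps only the $k$ largest singular values of a data matrix $A$,

$$A = U \Sigma V^\top \approx \sum_{i=1}^{k} \sigma_i \, u_i v_i^\top,$$

so a matrix with millions of columns is summarized by its $k$ strongest patterns. A coreset in this setting is a much smaller matrix whose top-$k$ structure approximates that of $A$, letting the truncated decomposition be computed on the coreset rather than on the full matrix.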

The year in review at the Computer Science and Artificial Intelligence Laboratory

Machines that predict the future, robots that patch wounds, and wireless emotion-detectors are just a few of the exciting projects that came out of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) this year. Here’s a sampling of 16 highlights from 2016 that span the many computer science disciplines that make up CSAIL.

Robots for exploring Mars — and your stomach

  • A team led by CSAIL director Daniela Rus developed an ingestible origami robot that unfolds in the stomach to patch wounds and remove swallowed batteries.
  • Researchers are working on NASA’s humanoid robot, “Valkyrie,” which will be programmed for trips into outer space and to perform tasks autonomously.
  • A 3-D printed robot made of both solids and liquids was printed in a single step, with no assembly required.

Keeping data safe and secure

  • CSAIL hosted a cyber summit that convened members of academia, industry, and government, including featured speakers Admiral Michael Rogers, director of the National Security Agency; and Andrew McCabe, deputy director of the Federal Bureau of Investigation.
  • Researchers came up with a system for staying anonymous online that uses less bandwidth to transfer large files between anonymous users.
  • A deep-learning system called AI2 was shown to be able to predict 85 percent of cyberattacks with the help of some human input.

Advancements in computer vision

  • A new imaging technique called Interactive Dynamic Video lets you reach in and “touch” objects in videos using a normal camera.
  • Researchers from CSAIL and Israel’s Weizmann Institute of Science produced a movie display called Cinema 3D that uses special lenses and mirrors to allow viewers to watch 3-D movies in a theater without having to wear those clunky 3-D glasses.
  • A new deep-learning algorithm can predict human interactions more accurately than ever before, by training itself on footage from TV shows like “Desperate Housewives” and “The Office.”
  • A group from MIT and Harvard University developed an algorithm that may help astronomers produce the first image of a black hole, stitching together telescope data to essentially turn the planet into one large telescope dish.

Tech to help with health

  • A team produced a robot that can help schedule and assign tasks by learning from humans, in fields like medicine and the military.
  • Researchers came up with an algorithm for identifying organs in fetal MRI scans to extensively evaluate prenatal health.
  • A wireless device called EQ-Radio can tell if you’re excited, happy, angry, or sad, by measuring breathing and heart rhythms.

Websites with fewer bugs

Today, loading a web page on a big website usually involves a database query — to retrieve the latest contributions to a discussion you’re participating in, a list of news stories related to the one you’re reading, links targeted to your geographic location, or the like.

But database queries are time consuming, so many websites store — or “cache” — the results of common queries on web servers for faster delivery.

If a site user changes a value in the database, however, the cache needs to be updated, too. The complex task of analyzing a website’s code to identify which operations necessitate updates to which cached values generally falls to the web programmer. Missing one such operation can result in an unusable site.
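The pattern itself is language-agnostic. Here is a minimal sketch in VBA (matching the language used earlier in this document, purely for illustration; the names and the stubbed-out query are hypothetical, and this is not how Ur/Web implements it):

Option Explicit

Private cache As Object   ' Scripting.Dictionary, created on demand

' Stand-in for a slow database call.
Private Function ExpensiveQuery(ByVal key As String) As String
    ExpensiveQuery = "result for " & key
End Function

Public Function CachedQuery(ByVal key As String) As String
    If cache Is Nothing Then Set cache = CreateObject("Scripting.Dictionary")
    If Not cache.Exists(key) Then
        cache.Add key, ExpensiveQuery(key)   ' save the answer for reuse
    End If
    CachedQuery = cache(key)
End Function

Public Sub UpdateDatabase(ByVal key As String)
    ' ... write the new value to the database here ...
    ' The step that is easy to miss, and that the new system
    ' automates: invalidating the stale cached answer.
    If Not cache Is Nothing Then
        If cache.Exists(key) Then cache.Remove key
    End If
End Sub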

This week, at the Association for Computing Machinery’s Symposium on Principles of Programming Languages, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory presented a new system that automatically handles caching of database queries for web applications written in the web-programming language Ur/Web.

Although a website may be fielding many requests in parallel — sending different users different cached data, or even data cached on different servers — the system guarantees that, to the user, every transaction will look exactly as it would if requests were handled in sequence. So a user won’t, for instance, click on a link showing that tickets to an event are available, only to find that they’ve been snatched up when it comes time to pay.

In experiments involving two websites that had been built using Ur/Web, the new system’s automatic caching offered twofold and 30-fold speedups.

“Most very popular websites backed by databases don’t actually ask the database over and over again for each request,” says Adam Chlipala, an associate professor of electrical engineering and computer science at MIT and senior author on the conference paper. “They notice that, ‘Oh, I seem to have asked this question quite recently, and I saved the result, so I’ll just pull that out of memory.’”

“But the tricky part here is that you have to realize when you make changes to the database that some of your saved answers are no longer necessarily correct, and you have to do what’s called ‘invalidating’ them. And in the mainstream way of implementing this, the programmer needs to manually add invalidation logic. For every line of code that changes the database, the programmer has to sit down and think, ‘Okay, for every other line of code that reads the database and saves the result in a cache, which ones of those are going to be broken by the change I just made?’”

Explore Sway navigation

Specific topics in this chapter include the following:

  • Creating a Sway account
  • Finding your way around Sway
  • Creating a new Sway
  • Signing in and out of Sway

Getting started with Sway is easy—sign up using your Microsoft account and begin designing. You can create a Sway from scratch or convert a Word document, PowerPoint presentation, or PDF to Sway. If you’re not sure where to begin, view sample Sways to discover how they were designed and use these for inspiration or as a template for your own Sways.

 

Creating a Sway Account

Creating an account on Sway is a simple, straightforward process. All you need is a Microsoft account and access to the Internet through your computer or iOS mobile device.

Navigate to https://sway.com in your browser, and then click the Get Started button.

  1. Enter the email address of the Microsoft account you want to use with Sway.
  2. Click the Next button.
  3. Enter your password.
  4. If you want to remain signed in, select the Keep Me Signed In check box.

Popular compiler yields more efficient parallel programs

Compilers are programs that convert computer code written in high-level languages intelligible to humans into low-level instructions executable by machines.

But there’s more than one way to implement a given computation, and modern compilers extensively analyze the code they process, trying to deduce the implementations that will maximize the efficiency of the resulting software.

Code explicitly written to take advantage of parallel computing, however, usually loses the benefit of compilers’ optimization strategies. That’s because managing parallel execution requires a lot of extra code, and existing compilers add it before the optimizations occur. The optimizers aren’t sure how to interpret the new code, so they don’t try to improve its performance.

At the Association for Computing Machinery’s Symposium on Principles and Practice of Parallel Programming next week, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory will present a new variation on a popular open-source compiler that optimizes before adding the code necessary for parallel execution.

As a consequence, says Charles E. Leiserson, the Edwin Sibley Webster Professor in Electrical Engineering and Computer Science at MIT and a coauthor on the new paper, the compiler “now optimizes parallel code better than any commercial or open-source compiler, and it also compiles where some of these other compilers don’t.”

That improvement comes purely from optimization strategies that were already part of the compiler the researchers modified, which was designed to compile conventional, serial programs. The researchers’ approach should also make it much more straightforward to add optimizations specifically tailored to parallel programs. And that will be crucial as computer chips add more and more “cores,” or parallel processing units, in the years ahead.

The idea of optimizing before adding the extra code required by parallel processing has been around for decades. But “compiler developers were skeptical that this could be done,” Leiserson says.

“Everybody said it was going to be too hard, that you’d have to change the whole compiler. And these guys,” he says, referring to Tao B. Schardl, a postdoc in Leiserson’s group, and William S. Moses, an undergraduate double major in electrical engineering and computer science and physics, “basically showed that conventional wisdom to be flat-out wrong. The big surprise was that this didn’t require rewriting the 80-plus compiler passes that do either analysis or optimization. T.B. and Billy did it by modifying 6,000 lines of a 4-million-line code base.”

Skilled human planners improve automatic planners

Every other year, the International Conference on Automated Planning and Scheduling hosts a competition in which computer systems designed by conference participants try to find the best solution to a planning problem, such as scheduling flights or coordinating tasks for teams of autonomous satellites.

On all but the most straightforward problems, however, even the best planning algorithms still aren’t as effective as human beings with a particular aptitude for problem-solving — such as MIT students.

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory are trying to improve automated planners by giving them the benefit of human intuition. By encoding the strategies of high-performing human planners in a machine-readable form, they were able to improve the performance of competition-winning planning algorithms by 10 to 15 percent on a challenging set of problems.

The researchers are presenting their results this week at the Association for the Advancement of Artificial Intelligence’s annual conference.

“In the lab, in other investigations, we’ve seen that for things like planning and scheduling and optimization, there’s usually a small set of people who are truly outstanding at it,” says Julie Shah, an assistant professor of aeronautics and astronautics at MIT. “Can we take the insights and the high-level strategies from the few people who are truly excellent at it and allow a machine to make use of that to be better at problem-solving than the vast majority of the population?”

The first author on the conference paper is Joseph Kim, a graduate student in aeronautics and astronautics. He’s joined by Shah and Christopher Banks, an undergraduate at Norfolk State University who was a research intern in Shah’s lab in the summer of 2016.