MIT develops programming language for multicore image processing

Researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a programming language called Halide for the implementation of multicore image processing algorithms.

Not only are Halide programs easier to read, write and revise than image-processing programs written in a conventional language, but because Halide automates code-optimization procedures that would ordinarily take hours to perform by hand, they're also significantly faster, says the team.

In tests, the MIT researchers used Halide to rewrite several common image-processing algorithms whose performance had already been optimized by seasoned programmers. The Halide versions were typically about one-third as long but offered significant performance gains: two-, three-, or even six-fold speedups. In one instance, the Halide program was actually longer than the original, but the speedup was 70-fold.

However, the development is currently separate from the OpenCL multicore programming specification.

Jonathan Ragan-Kelley, a graduate student in the Department of Electrical Engineering and Computer Science (EECS), and Andrew Adams, a CSAIL postdoc, led the development of Halide, and they've released the code online.

Halide doesn't spare the programmer from thinking about how to parallelize efficiently on particular machines, but it splits that problem off from the description of the image-processing algorithms. A Halide program has two sections: one for the algorithms, and one for the processing "schedule." The schedule can specify the size and shape of the image chunks that each core needs to process at each step in the pipeline, and it can specify data dependencies — for instance, that steps being executed on particular cores will need access to the results of previous steps on different cores. Once the schedule is drawn up, however, Halide handles all the accounting automatically.
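This split is visible in the kind of separable blur the Halide team uses as its standard illustration. The sketch below is based on the publicly released Halide C++ API and requires the Halide library to compile; the tile sizes and vector widths are illustrative choices, not prescriptions:

```cpp
#include "Halide.h"
using namespace Halide;

int main() {
    ImageParam input(UInt(16), 2);
    Func blur_x("blur_x"), blur_y("blur_y");
    Var x("x"), y("y"), xi("xi"), yi("yi");

    // The algorithm section: what each pixel's value is,
    // expressed as a separable 3x3 box blur.
    blur_x(x, y) = (input(x - 1, y) + input(x, y) + input(x + 1, y)) / 3;
    blur_y(x, y) = (blur_x(x, y - 1) + blur_x(x, y) + blur_x(x, y + 1)) / 3;

    // The schedule section: how to execute it on a multicore CPU.
    // Process the image in 256x32 tiles, run rows of tiles on
    // different cores, and vectorize within each tile.
    blur_y.tile(x, y, xi, yi, 256, 32)
          .vectorize(xi, 8)
          .parallel(y);

    // Compute blur_x's intermediate results as needed within each
    // tile of blur_y -- the data-dependency bookkeeping between the
    // two stages is handled by the compiler.
    blur_x.compute_at(blur_y, x)
          .vectorize(x, 8);

    return 0;
}
```

The two `Func` definitions never mention cores, tiles, or vectors; all of that lives in the scheduling calls, which the compiler uses to synthesize the optimized loop nest.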

A programmer who wants to port a program to a different machine changes just the schedule, not the algorithm description. A programmer who wants to add a new processing step to the pipeline simply plugs in a description of the new procedure, without having to modify the existing ones. (A new step in the pipeline will require a corresponding specification in the schedule, however.)
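Concretely, retargeting means swapping a few scheduling calls while the algorithm definitions stay untouched. The fragment below is a hedged sketch of what those alternatives might look like for a two-stage blur, assuming Halide's `tile`, `parallel`, `vectorize`, and `gpu_tile` scheduling directives; the exact tile sizes are placeholders:

```cpp
// The algorithm (the blur_x / blur_y definitions) is unchanged in
// every case below; only the schedule differs per target machine.

// Naive single-threaded baseline: compute each stage in full.
blur_y.compute_root();

// Multicore CPU: tile, parallelize across rows of tiles, vectorize.
blur_y.tile(x, y, xi, yi, 256, 32).vectorize(xi, 8).parallel(y);

// GPU (assuming a GPU-capable Halide target): map tiles onto
// thread blocks with the gpu_tile directive.
blur_y.gpu_tile(x, y, xi, yi, 16, 16);
```

Because each variant is one or two lines, trying a different parallelization strategy is an edit, not a rewrite.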

"When you have the idea that you might want to parallelize something a certain way or use stages a certain way, when writing that manually, it's really hard to express that idea correctly," said Ragan-Kelley. "If you have a new optimization idea that you want to apply, chances are you're going to spend three days debugging it because you've broken it in the process. With this, you change one line that expresses that idea, and it synthesizes the correct thing."

Although Halide programs are simpler to write and to read than ordinary image-processing programs, they still frequently outperform even the most carefully hand-engineered code, because Halide handles the low-level bookkeeping automatically. Moreover, Halide code is so easy to modify that programmers can simply experiment with half-baked ideas to see whether they improve performance.

"You can just flail around and try different things at random, and you'll often find something really good," said Adams. "Only much later, when you've thought about it very hard, will you figure out why it's good."

This article was first posted on our sister site EE Times Europe.


Related links and articles:
 
www.mit.edu

News articles:

Mali cores support Khronos compression

ARM supports European research on GPU programming

Altera's OpenCL for FPGAs dramatically reduces development times
