
Intel's James Reinders on parallelism - Part 2

In Part 2 of my discussion with Intel's Director of Marketing and Business for the company's Software Development Products, we move on to cloud computing, functional and dynamic languages, and what needs to happen with computer science education.

Gordon Haff

Intel's James Reinders is an expert on parallelism; his most recent book covered the parallel programming support that Intel Threading Building Blocks adds to C++. He's also the Director of Marketing and Business for the company's Software Development Products. In Part 1 of our discussion at the Intel Developer Forum in September, we talked about how to think about performance in a parallel programming environment, why such environments give developers headaches, and what can be done about it.

Here, in Part 2, we move on to cloud computing, functional and dynamic languages, and what needs to happen with computer science education.

Few wide-ranging conversations these days would be complete without at least a nod to cloud computing, which Reinders views as very much connected to parallel programming.

Cloud computing is parallel programming. You're solving the same problem. In fact, someone that's good at decomposing a program to run in parallel on a multicore or on a supercomputer... the same thought process is necessary to decompose a problem in cloud computing. What's different in cloud computing is that the cost of a connection or a communication between two different clouds is so high. You really need to get it right. It works best when a little message is sent, does an enormous amount of computing, and gets a little message back.

Data parallelism tends to be very fine-grained.

Task parallelism, like we see with Cilk and Threading Building Blocks, is a little bit more coarse.

Cloud computing has to be very very coarse-grained parallelism.

But there's something common about how you have to think about it.

The tools that will let people do cloud computing, express a problem in cloud computing, may eventually just map onto a multicore.

The granularity that Reinders discusses refers to how small a chunk of computing can be, given the cost and latency of communications. Within a single processor, communications bandwidth is high and latencies are low, so software can afford to perform a relatively small task and then synchronize the results. (Although moving large amounts of data can still be relatively "expensive," which is why data parallelism can be finer-grained than task parallelism; see Part 1 for further background on data parallelism.)

By contrast, external communication networks have limited bandwidth and are relatively slow--on the order of four or five orders of magnitude slower than communications within a system. Therefore, tasks have to be parceled out in relatively large chunks that, ideally, don't have to be packaged up with a significant amount of local data.
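To make the granularity contrast concrete, here is a minimal sketch of fine-grained data parallelism using Threading Building Blocks' parallel_for; the header names and the trivial squaring loop are my own illustrative assumptions, not something from the interview.

```cpp
// Fine-grained data parallelism: each chunk of work is tiny (a few
// multiplications), which only pays off because the threads share
// fast memory within a single system.
#include <tbb/blocked_range.h>
#include <tbb/parallel_for.h>
#include <vector>

void square_in_place(std::vector<double>& v) {
    tbb::parallel_for(
        tbb::blocked_range<size_t>(0, v.size()),
        [&](const tbb::blocked_range<size_t>& r) {
            for (size_t i = r.begin(); i != r.end(); ++i)
                v[i] *= v[i];  // a very small unit of work per element
        });
}
```

The same decomposition spread across a cloud would have to ship much larger blocks of the vector per message to amortize network latency, which is the coarse-grained end of the spectrum Reinders describes.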

Next up was education. Here, Reinders' basic message was to focus on theory before diving into implementation details. I suspect this highlights one of the key challenges: parallel programming tends to require a solid grasp of programming theory and doesn't lend itself particularly well to just "hacking around" in the absence of that grounding.

I've been doing a lot in the area of teaching parallelism. What a lot of people think of right away is teach them locks, teach them mutexes [algorithms to prevent the simultaneous use of a common resource], teach about how to create a thread, destroy a thread. That's all wrong. You want to be talking at a higher level. How do you decompose an algorithm? What is synchronization in general? Why does it exist?

Things I would hope undergraduates would learn are parsing theory, DAG representations [a tool used to represent common subexpressions in an optimizing compiler], database schemas, data structures, algorithms. All these are high level, not things like [the programming language] Java. Parallel programming's like that too. You get hands-on touching the synchronization method or whatever but you want to teach the higher level key concepts.

For some people it's going to be more in tune with their thinking, but you try to teach it to everyone.
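A rough sketch of the contrast Reinders draws between teaching mechanics and teaching decomposition, again using Threading Building Blocks as the higher-level option (the summation example is mine, not his): the first function makes the programmer manage threads and a mutex; the second only states how to split the work and combine partial results.

```cpp
#include <tbb/blocked_range.h>
#include <tbb/parallel_reduce.h>
#include <functional>
#include <mutex>
#include <numeric>
#include <thread>
#include <vector>

// Mechanics first: explicit threads and an explicit lock.
double sum_with_locks(const std::vector<double>& data) {
    double total = 0.0;
    std::mutex m;
    auto worker = [&](size_t begin, size_t end) {
        double local = std::accumulate(data.begin() + begin,
                                       data.begin() + end, 0.0);
        std::lock_guard<std::mutex> guard(m);  // hand-written synchronization
        total += local;
    };
    std::thread t1(worker, 0, data.size() / 2);
    std::thread t2(worker, data.size() / 2, data.size());
    t1.join();
    t2.join();
    return total;
}

// Decomposition first: describe how to split and recombine the work;
// the library chooses the threads and the synchronization.
double sum_with_reduce(const std::vector<double>& data) {
    return tbb::parallel_reduce(
        tbb::blocked_range<size_t>(0, data.size()), 0.0,
        [&](const tbb::blocked_range<size_t>& r, double running) {
            for (size_t i = r.begin(); i != r.end(); ++i)
                running += data[i];
            return running;
        },
        std::plus<double>());
}
```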

Given that most of today's languages weren't expressly designed for parallel programming, discussions about parallelism often turn to new programming languages. That mostly means functional languages, but it can also involve dynamic or scripting languages, which generally handle more low-level details under the covers than Java or C++ do.

Functional languages don't lend themselves to easy, or easily comprehensible, description. A common shorthand is that "Functional programming is a style of programming that emphasizes the evaluation of expressions, rather than execution of commands." But that probably doesn't help much if you don't already know what it is. As for Wikipedia's entry, Tim Bray--no programming slouch--called it fairly impenetrable. (Perhaps you begin to see the problem.)
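One loose way to see the "expressions versus commands" distinction, using C++ rather than an actual functional language (the sum-of-squares example is mine): the first function executes commands that mutate an accumulator, while the second produces its result as the value of a single folded expression.

```cpp
#include <numeric>
#include <vector>

// Command style: a sequence of statements mutating shared state.
double sum_squares_imperative(const std::vector<double>& xs) {
    double total = 0.0;
    for (double x : xs)
        total += x * x;  // each iteration is a command that changes 'total'
    return total;
}

// Expression style: the answer is the value of one expression, closer
// in spirit to how a functional language would phrase it.
double sum_squares_expression(const std::vector<double>& xs) {
    return std::accumulate(xs.begin(), xs.end(), 0.0,
                           [](double acc, double x) { return acc + x * x; });
}
```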

There are a couple of things I'm interested in functional languages for. We won't wake up one day and find everyone using them. It's sequential semantics again: sequential semantics appeal to people, and functional languages don't have them. But some people eat them up.

And they solve amazing problems. You can code things up in them that are much easier to understand than if they were written in a traditional language, although they can be cryptic or terse to a lot of programmers.

Erlang [a functional language] has gotten more and more usage. Maybe it is creeping in. It's not going to take over the world overnight, but it seems like the one that might stay around. We may be talking about it 20 years from now and saying, yeah, Erlang's been around for 25 years. It might be accepted as a language. It may have legs.

But even Java [unlike Erlang] appealed to people who programmed in C and C++; it didn't challenge them to think differently. And because of the strict typing and stuff, it helps [the enterprise developer] to deploy certain types of apps.

Python [a dynamic language] is interesting. It is so popular with a lot of scientists. If we can figure out where to partner or extend some of the things we're doing, Python's on my short list of languages that we want to help with parallelism. Maybe some of our Ct technology would apply there. We'll see if other people agree with us. I think the concepts we're talking about are pretty portable.

Finally, we concluded our discussion with hardware. Are there opportunities at the hardware and firmware level with memory subsystems or with specific technologies such as transactional memory? Sun Microsystems was very interested in transactional memory in the context of its now-canceled "Rock" microprocessor. The basic concept behind transactional memory is to provide an alternative to lock-based synchronization by handling concurrency problems at a low level as they occur, rather than having the programmer protect against them all the time.
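As a rough sketch of the concept (my own example, not one from the interview): with locks, the programmer pessimistically guards the shared data on every transfer; with transactional memory, the programmer marks the region atomic and the system resolves conflicts only when two operations actually collide. The transactional variant below assumes a compiler with the GNU transactional-memory extension (g++ -fgnu-tm); a hardware design such as Rock's would play the same role underneath.

```cpp
#include <mutex>

struct Account { double balance = 0.0; };

std::mutex transfer_lock;

// Lock-based synchronization: every transfer is serialized, whether or
// not two transfers actually touch the same accounts.
void transfer_locked(Account& from, Account& to, double amount) {
    std::lock_guard<std::mutex> guard(transfer_lock);
    from.balance -= amount;
    to.balance   += amount;
}

// Transactional memory (illustrative; requires -fgnu-tm): the region is
// declared atomic, and conflicting transactions are detected and retried
// only when they really occur.
void transfer_transactional(Account& from, Account& to, double amount) {
    __transaction_atomic {
        from.balance -= amount;
        to.balance   += amount;
    }
}
```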

The best solutions tend to be not so much silver bullets as incremental. Nehalem [Intel's latest microprocessor generation] in a way probably helped us more than anything in recent memory because we moved to the QuickPath interconnect and moved bandwidths up and latencies down. Larrabee [a many-core Intel microprocessor still under development] may pave the way with some innovations in interconnects. I think there may be some refinements needed. Interconnecting the processors is a classic supercomputer issue.

Transactional memory has slammed up against a very tough reality, which is that hardware always wants to be finite; software always wants to be infinite. I think there's something there. I think the people looking at transactional memory have started to make observations about locks that may end up being useful. It's funny. The mission of transactional memory is to get rid of locks, but the more they looked at it, the more they understood about how locks behave. There might actually be possibilities to make locks behave better in hardware.

Can we do the hardware a little differently? Not the sexiest thing in the world. But as we move from single-threaded to multi-threaded, what complications are we creating [that the hardware can help with]?

Even if you don't subscribe to the more extreme views of programming and software being in a crisis because of the move to multi-core, we're clearly in a transition. New tools are needed and programmers will have to adapt as well, to at least some degree.