These are notes on the panel on Grand Challenges in Programming Languages, which was held at POPL 2009. The notes were taken by Benjamin C. Pierce and edited by Yitzhak Mandelbaum.
PANELISTS:
Simon Peyton Jones
Xavier Leroy
Arvind, standing in for Martin Rinard
Kathryn McKinley
Greg Morrisett
MODERATOR: Andrew Appel
Welcome: Andrew Appel
Actually, the topic is grand challenges in CS to which PL is the solution...
Simon Peyton Jones
Effects.
Our mainstream programming paradigms are all about programming with effects, and a good deal of what POPL is about is looking for ways of dealing with them: the evils of shared state, and so on.
Programs that use effects locally, selectively, or minimally are easier to understand, verify, maintain, run on multicore machines, etc., etc.
Two responses: (1) That's the way things are. There's too much legacy code. Deal with the complexity of effectful programs. (The MS answer.) (2) Stop fixing the sins of yesterday, take what we now know, and start designing the languages of the future, in which perhaps effects will be restricted or limited.
E.g., F#, which reduces the need for effects.
E.g., Haskell's story with monads (a small sketch follows below).
E.g., Erlang as another intermediate point: message passing instead of shared-variable concurrency. Solve the problem by construction.
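A minimal Haskell sketch of the point (the function names here are my own, invented for illustration): the type of a function records whether it can perform effects, so effectful code is visibly separated from pure code.

    -- Pure: no effects can occur here, so this code is easy to
    -- test, reorder, memoize, or run in parallel.
    average :: [Double] -> Double
    average xs = sum xs / fromIntegral (length xs)

    -- Effectful: the IO type records that this code touches the
    -- outside world; the effect is visible in the interface.
    reportAverage :: [Double] -> IO ()
    reportAverage xs = putStrLn ("average = " ++ show (average xs))

    main :: IO ()
    main = reportAverage [1, 2, 3, 4]

The effectful part is a thin outer layer; everything underneath is pure. This is the "effects used locally, selectively, or minimally" style described above.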
That's my challenge to you for the next decade.
Xavier Leroy
A personal anecdote: when I joined INRIA, there was general agreement among management that PL research was dead (C++ had won). The next year, Java came out and took the world by storm. The area still seems healthy.
Lesson: Don't make predictions.
Instead, let's just talk about a couple of exciting topics.
One topic: Program verification. Twenty years ago, Hoare Logic and Abstract Interpretation were abstract constructions, useful for 10-line Pascal programs. Now we have tools combining static analysis, theorem proving, and automatic verification that are applied to real systems.
Challenge: Continue in this direction, developing formal methods for real code, not just for abstract models.
Another topic: End-to-end verification, e.g., DSLs like Simulink, taking the verification all the way through to machine code (using PCC, certified compilers, etc.). Much remains to be done in combining deductive verification with static analysis.
Yet all this progress in verification has a downside: less incentive to come up with better languages and programming models. E.g., if you throw enough verification technology at C programs, you can make them reliable, so why bother with anything better? Of course, this is a fallacy: better languages improve our ability to verify AND to program. So I'm happy to see language design work going on (e.g., what I saw at DAMP yesterday and in talks today).
Parallel programming is back. We could spend the next 15 years updating our verification technologies to deal with shared-variable concurrency in C, Java, C#, etc. Or we could work to come up with better programming models for parallelism (transactions, data-parallel constructs, message passing [my personal favorite], ...) AND come up with better static analysis and verification technologies.
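To make the message-passing option concrete, here is a small Haskell sketch (my own illustration, not anything presented on the panel): two threads that communicate only through channels, with no shared mutable state.

    import Control.Concurrent (forkIO)
    import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)
    import Control.Monad (forever, replicateM)

    -- The worker loops forever: receive a request, send back a result.
    -- All communication is explicit; nothing is shared.
    worker :: Chan Int -> Chan Int -> IO ()
    worker requests results = forever $ do
      n <- readChan requests
      writeChan results (n * n)

    main :: IO ()
    main = do
      requests <- newChan
      results  <- newChan
      _ <- forkIO (worker requests results)
      mapM_ (writeChan requests) [1 .. 5]
      answers <- replicateM 5 (readChan results)
      print answers  -- prints [1,4,9,16,25]

Because the threads share nothing, there is nothing for a data race to race on: the problem is solved by construction, as in the Erlang example above.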
If you have an itch for designing a new programming language, scratch it: it might be the next big thing.
Martin Rinard (represented by Arvind)
"Almost All Software is Over-Engineered"
What do you mean by overengineered?
* Few errors
* Good reliability
The SPECS are bad!
What's bad about overengineering? Wastes money.
Why is software overengineered? Because of the fear that small bugs will cause behavioral discontinuities. This is exacerbated by brittle language semantics, and by perfectionism on the part of programmers.
So what to do? Reduce perfection to optimize output.
Step one: Distribute perfection appropriately.
Use verification to make some parts perfect.
Isolate perfect from imperfect parts.
Step two: Identify properties of interest
Step three: Invent transformations that map a program to the closest program with some property of interest
Two strategies:
- unsound transformations (see the sketch below)
- restricted properties and programs (start small, generalize over time)
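As a toy illustration of an unsound transformation (my own sketch in Haskell, loosely in the spirit of loop perforation; none of these names come from Martin's slides), consider inspecting only every k-th input, trading a little accuracy for a factor-of-k reduction in work:

    -- Exact version: looks at every sample.
    meanExact :: [Double] -> Double
    meanExact xs = sum xs / fromIntegral (length xs)

    -- "Perforated" version: looks at only every k-th sample.
    -- Unsound in general (the answer changes), but for many
    -- noisy-data computations the result is close enough.
    meanPerforated :: Int -> [Double] -> Double
    meanPerforated k xs = meanExact (everyNth k xs)

    everyNth :: Int -> [a] -> [a]
    everyNth _ []       = []
    everyNth n (y : ys) = y : everyNth n (drop (n - 1) ys)

The property of interest (step two) is then a bound on how far meanPerforated's answer can drift from meanExact's; enforcing it gives a guarantee about the transformed program rather than demanding perfection everywhere.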
Benefits of property enforcement:
* Eliminate discontinuities
* Obtain guarantees that cannot be obtained in any other way (in practice)
Wildcard: Hardware challenges might change the current state of things so that software is no longer largely "good enough":
* parallelism
* mobile devices
* hardware errors
All of these make developing good software much harder.
Kathryn McKinley
We won. Abstraction has made everybody's life better. Since Backus's Fortran in 1954 we've come a long way. Why? Hardware got faster, so the costs of abstraction kept getting hidden.
Have we laid the groundwork for parallel programming? No. We took the most important thing from the hardware and either exposed it in a hideous way (MPI) or else abstracted it away (transactions).
Parallelism and communication must be linked together intimately, and no programming abstraction has ever done it. The closest attempt was High Performance Fortran, which failed.
Challenge: Do a better job with this...
Greg Morrisett
Obvious things to talk about: concurrency and verification.
About concurrency, we can have the most impact not in language design per se but in architecture. We looked at concurrency and parallelism 30 years ago and nobody adopted it. But now people are working on language designs for today's architectures rather than for those of 10-15 years from now.
E.g., homogeneous multi-core machines are doomed for many reasons (power, economics, etc.).
Shared memory is doomed: It does not scale. It's got to be message passing at some level. Do we want to expose this? Or hide it and present a shared memory abstraction?
Will data organization, layout, and orchestration be the important aspects of the languages of the future?
For architects: What should an ISA look like, in the age of JITs? We can now further isolate the ISA from the higher levels of the system (think LLVM) -- where do you draw the line? What are the right abstractions? (SSA is excessively scalar-oriented.) What is the right intermediate language? How should hardware adjust? Should we get away from register machines? We're about to see a radical rethinking of architectures, and PL people can contribute.
If you're going to radically change the architecture, how do you evaluate it? By running old C programs on it? No, you need things like C-- and LLVM. The architecture world has done a crappy job of this, but we haven't helped: we haven't given them the tools they need.
A radical idea: Throw memory away. Why aren't we looking at more things like content-addressable memories? Moving computation to the data.
Another challenge: How do you program the coming world of lots of robots, e.g., bio-programming? What's it going to look like? Lots of small devices with noisy sensors and inaccurate actuators.
Another: The gap between PLs and HCI (Human-Computer Interaction). A PL is really a medium for communication among humans, but we don't really have principles for evaluating our languages in this sense. (But don't trust HCI people to know how to test things effectively.) E.g., how do we take second-order learning effects into account when assessing the effectiveness of a new language?
A final one: Education. We do a crappy job of teaching undergraduate PL. We don't have a standard textbook. We don't have a unified view of what is important. This creates a challenge both for teaching students and for informing others in computer science about PL.
Discussion following remarks
Simon PJ: One very understudied thing: Visualization tools for understanding behavior of parallel programs are sadly lacking. Good research topic.
[Kathryn: See my ParaScope project from 15 years ago!]
Zhong Shao: One purpose of the panel is to try to answer the annoying question from other people, "What are you doing? What are the burning questions? Are you doing a moonshot?" In particular, how do we answer this question effectively to funding agencies?
Simon: How about "Making the fabric out of which future civilization will be built"?
Xavier: Secure software. High assurance.
Kathryn: We design the media through which people express solutions to all of society's problems. If you don't have the foundations right, you're missing innovation opportunities.
David Monniaux: Gilles Dowek and Gérard Berry are trying to convince the French government to make PL a part of French secondary education. Their argument is that programs are brittle: you do something a little bit wrong, and something very bad happens. So it's important to teach formal reasoning.
???: One point missing from the discussion is the interoperability of languages. There seems to have been little progress in this area.
Andrew: MPI-style coding helps because code in different languages doesn't need to interact directly.
Kathryn: I agree! [with questioner] We're going to see more multi-lingual systems in the future. Need to address this issue.
Donna Malayeri: Comment: Interesting to hear Greg talk about PL + HCI. But I think your comment shows some of the biases of our community. What should we do to create opportunities for collaboration?
Greg: Actually, in security conferences there is starting to be much more emphasis on usability of notations. But I really hate the first-order studies that don't take learning into account. (E.g., so many user interfaces are designed for the beginner, not the expert.)
Shane Markstrum: How do we keep from losing ideas in the churn of research?
Greg: The current "cycle" model seems OK. E.g., transactions were hot 15 years ago, currently getting rebooted.
Arvind: Context is very important. I gave hundreds of talks in the 80s about how parallelism was going to be ubiquitous in the next decade. [Appel: My father always said being right early is the same as being wrong.] Now there's huge interest.
Jonathan Sobel: A story about correctness (and our over-focus on it) from industry. I was working on a collaborative system where the data was occasionally a bit out of date. The programmers had a lot of trouble getting comfortable with this, and with the fact that it doesn't matter too much. Yet it was necessary for scalability. Ultimately, what mattered in that system was that the data was correct *eventually*; it didn't need to be correct immediately.
Arvind: To reinforce this point, there is a lot of formality in Martin's work: If you're going to do unsound transformations, you have to be very clear about what you ARE doing.
Andrew Myers: Another issue to toss on the table: something that's been creeping up on us is the size of the codebases we're talking about. Software systems are immense now. We are, in a way, victims of our own success: people can use good modularity mechanisms (types, etc.) to build huge systems -- so huge that we can't understand them any more. So, challenge: what sorts of mechanisms could we be working on to help design programs an order of magnitude bigger than what we have now? (E.g., languages don't help when the number of MODULES in a program gets too big.) There are complex contracts between the parts of a big system, and our languages are not helping us.
Kathryn: Martin would say that we CAN'T understand a 50M line program. Understanding is spread thinly over many people.
Andrew: Complex (implicit) contracts/interfaces cannot be captured in the languages in which the code is written.
Gabriel Dos Reis: None of the questions were about education. It's very hard to get freshmen or sophomores to understand that CS is not about game development. We're doing a crappy job.
Dan Spoonhower: A more specific challenge, in response to all the points about concurrency/parallelism. Different processors have very different caches, but we still write programs that get pretty good performance on lots of them. What is the right abstract model of performance to use to get good behavior on a wide range of parallel machines?
John Hughes: David Turner once said that what was needed was an order of magnitude improvement in programming productivity, and functional languages can deliver it. This was very exciting, and the promise has largely been delivered. But where is the next order of magnitude coming from? Are there any more big ideas out there that promise this kind of improvement? Or is programming as good as it's going to get?
??? (from CMU): Software is collaborative: what can we do with languages to help humans work together more effectively?
Armando Solar-Lezama(?): It's easy to overlook the successes of the community -- how many of today's programs could not have been built in older, worse languages? This "enabling" can serve as a measure of a programming language's (or set of languages') success. What are the applications that we want to write now but can't?
Thorsten Altenkirch: Verification was mentioned, but what was missing was the issue of verification certificates. We should have an "economy of proof." It should be possible to make money producing such certificates.
Arvind: There's an emerging market in selling ASSERTIONS (because they are very hard to write). E.g., specifications of standard hardware buses.
Simon: Expanding on the earlier remark: this stuff really IS the fabric of future society. It's like we're building the empire state building out of matchsticks. Let's invent steel!
Wednesday, January 21, 2009