Two weeks ago, I presented Spring ME at JavaOne. It was an interesting conference in more than one way. Some things were just plain surreal. Larry Ellison calling on the OpenOffice community to work on JavaFX integration? What was that all about? And I haven’t seen the official numbers yet, but the number of attendees must have been an all-time low. Same with the number of parties. 🙁
Nevertheless, I have to say I had a marvelous week. Despite the economic slow-down, the quality of the talks was great, and the slow-down surely doesn’t seem to have prevented people from exploring new ideas. From my perspective, the big topics were JavaFX (largely pushed by Sun), Cloud Computing (a lot of vendor push, quite some traction, very little convergence) and other languages on the VM (mostly driven by traction from the community).
In summary, I would say there is pretty widespread agreement that multi-core and cloud computing have exposed some of the weaknesses of the traditional VM, language and enterprise architectures. However, Java is certainly not dead, and it is not going anywhere soon; it’s very much alive and kicking. That said, there are quite a few different perspectives on how Java should evolve from here. And since a significant bit of the platform is driven by the community, the shake-out of these different opinions will take some time. Nevertheless, all of these initiatives are promising, and there seems to be a strong shared sense of avoiding ceremony. I’m sure something good will come out of it.
The rest of this entry is just a handful of highlights picked from my notebook.
Crossbow
Network virtualization. Pretty cool stuff. It allows you to create a virtual network with virtual machines, and emulate certain network parameters. The GUI for creating these virtual networks is pretty awful (where is JavaFX?), but what’s underneath is smoking! What’s more, it has DTrace built in, so you can drill down into the virtual network layer. Thinking about Cloud computing, it makes perfect sense to virtualize the network as well. Project Caroline had similar ambitions, but was layered on top of the OS. Crossbow makes your OS ready for virtualized networks.
In order to bake network virtualization capabilities right into the guts of the operating system, Sun actually had to rethink a significant part of the networking stack. The result is a pretty big overhaul of the existing stack, which turned out to have a couple of interesting side effects: the redesign alone yielded a 200 Mb/s throughput increase for free.
Groovy
Here’s what’s new in Groovy 1.6:
- Groovy is faster (3-5 times)
- New syntax sugar: def(a, b) = [0, 1]
- You can omit return statements at the end of a method. (The return value will be the last expression evaluated. Now, where have I seen that before? 😉 )
- It supports compile time meta programming; you can transform the AST at compile time. (Examples are driven by annotations: @Lazy, @Delegate.)
- Grape is a new packaging engine that, AFAICT, automatically downloads dependencies when you need them, so you can write scripts that depend on external libraries; those libraries will be pulled down from public repositories automatically.
- Groovy now allows you not only to use annotations, but also to define annotations.
- @Bindable
- Griffon, for Rich Internet Applications, has been spun off into its own project
- Updates to the ExpandoMetaClass
- JmxBuilder
- OSGi support(!)
Da Vinci (The Renaissance VM)
Currently aiming at providing support for invokedynamic (in order to ease the integration of dynamically typed languages), method handles and interface injection. Unfortunately, it’s unlikely that tail call optimization will be included in the next release. That’s a bummer: Scala and Clojure, my hopes for the VM, would have benefited seriously from it. And the crazy thing is: the code is already out there. Arnold Schwaighofer produced an implementation of tail call optimization, so you wonder why it doesn’t just get included. It turns out it all has to do with the willingness of people to move this through the JCP. (Which makes you wonder about the JCP…)
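To give an idea of what a method handle buys you, here is a minimal sketch written against the java.lang.invoke names; the exact API is still in flux, so treat this as illustrative only. A method handle is essentially a typed, directly invokable function pointer:

import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class MethodHandleSketch {
    public static void main(String[] args) throws Throwable {
        // Look up a handle to String.length(): a typed function pointer.
        MethodHandle length = MethodHandles.lookup()
                .findVirtual(String.class, "length", MethodType.methodType(int.class));
        // invokeExact demands that the call site signature matches the handle's type exactly.
        int n = (int) length.invokeExact("JavaOne");
        System.out.println(n); // prints 7
    }
}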
JavaFX
With JavaFX’s omnipresence at JavaOne, I actually expected to get a little more exposed to it, but it didn’t happen. (Did my subconscious play a role here? I surely didn’t intend to avoid it at all costs.) Anyhow, one of the things demoed by Nandini Ramani was a new tool for creating JavaFX applications. It looked marvelous; live preview, slick drag & drop, pretty cool. In addition to that, it allowed you to target several platforms and keep the code in sync. So, to imagine how that works: you create a JavaFX application, you target both desktop and phone, but then – because of the phone’s form factor – you decide to drop some graphics from the phone version. The tool allows you to continue working on the common bits while keeping the scaled-down version in sync. I can see how that could be useful. Unfortunately, it is going to take some more time to get the tool out there…
GridGain
GridGain is just way cool. The ease of setting up a computing grid is unprecedented. The talk was pretty good as well. Admittedly, it hovered near the surface, but it was all about simplicity, including these notions of grid and cloud computing (a bare-bones sketch of the split-and-aggregate idea follows the definitions):
Grid: two computers working on the same task
Grid Computing: Parallel processing + parallelized data storage
Cloud: Grid Computing + Data Center Virtualization
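To be clear, this is not GridGain’s API; just a bare-bones java.util.concurrent sketch of the split-and-aggregate idea, with a local thread pool standing in for the grid nodes (GridGain would ship the sub-tasks to remote JVMs instead):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SplitAndAggregate {
    public static void main(String[] args) throws Exception {
        ExecutorService workers = Executors.newFixedThreadPool(4); // the "grid"
        String sentence = "the quick brown fox jumps over the lazy dog";

        // Split: one sub-task per word.
        List<Callable<Integer>> tasks = new ArrayList<Callable<Integer>>();
        for (final String word : sentence.split(" ")) {
            tasks.add(new Callable<Integer>() {
                public Integer call() {
                    return word.length();
                }
            });
        }

        // Scatter, then gather and aggregate the partial results.
        int total = 0;
        for (Future<Integer> partial : workers.invokeAll(tasks)) {
            total += partial.get();
        }
        System.out.println("Total number of characters: " + total);
        workers.shutdown();
    }
}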
A couple of other interesting observations:
At Amazon, a 100 ms increase in latency causes a 1% drop in sales.
At Google, a 500 ms increase in latency causes a 20% drop in traffic.
Clojure
I like it. A lot. Finally a language that makes the investment in learning Lisp rewarding. The great thing about Clojure is that:
- It recognized the overkill of parentheses in Lisp, and solved that. (In fact, Rich Hickey continuously emphasized that Clojure has fewer parentheses than Java, which is interesting, if you come to think about it.)
- It is a pure functional language.
- It has software transactional memory baked into its guts.
- You can only update state through transactions.
- It supports nested transactions.
- Internally, its software transactional memory is based on bit partitioned hash trees, structural sharing and path copying. Or if you don’t know what that means: it’s pretty fast, with relatively little impact on the heap.
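The structural sharing bit is easier to see in code than in prose. This is emphatically not Clojure’s implementation (that one uses the bit partitioned hash trees mentioned above); just a toy persistent list in Java, showing how an “update” produces a new version that shares most of its structure with the old one:

public final class PersistentList {

    final int head;
    final PersistentList tail; // shared between versions, never mutated

    PersistentList(int head, PersistentList tail) {
        this.head = head;
        this.tail = tail;
    }

    // Prepending shares the entire existing list as the tail: O(1), no copying at all.
    PersistentList cons(int value) {
        return new PersistentList(value, this);
    }

    // "Updating" index i copies only the first i nodes (path copying)
    // and shares everything after the updated node with the old version.
    PersistentList with(int index, int value) {
        if (index == 0) {
            return new PersistentList(value, tail);
        }
        return new PersistentList(head, tail.with(index - 1, value));
    }

    public String toString() {
        return head + (tail == null ? "" : " " + tail);
    }

    public static void main(String[] args) {
        PersistentList v1 = new PersistentList(3, null).cons(2).cons(1); // 1 2 3
        PersistentList v2 = v1.with(0, 42);                              // 42 2 3
        // Both versions stay usable; v2 shares the "2 3" tail with v1.
        System.out.println(v1 + " / " + v2);
    }
}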
Google App Engine
I only walked in at the end. Interesting to see how they mapped JDO/JPA to BigTable. I guess the key take-away is this: even though it does implement the API, you should carefully consider what is happening behind the scenes. JPA was really intended for object-relational mapping, and BigTable is not a relational database. Things that are pretty efficient in JPA on top of a relational database will be quite inefficient when implemented on top of BigTable, and vice versa. (Hmmm, this smells like a future blog topic.)
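To make that concrete, here is a hypothetical JPA snippet (entity and property names made up). Against a relational database this is a trivial SELECT with two range predicates; on the datastore behind App Engine, where inequality filters are restricted to a single property and joins are not supported, the same query simply does not fly:

import java.util.Date;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Query;
import javax.persistence.Temporal;
import javax.persistence.TemporalType;

@Entity
class Order { // hypothetical entity
    @Id @GeneratedValue Long id;
    double total;
    @Temporal(TemporalType.TIMESTAMP) Date placedAt;
}

public class OrderQueries {
    // Fine on an RDBMS; not executable as-is on top of BigTable,
    // because it filters on two properties with inequality operators.
    public static Query bigRecentOrders(EntityManager em) {
        return em.createQuery(
            "select o from Order o where o.total > :minTotal and o.placedAt > :since");
    }
}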
Spring 3.0
- Java 5 syntax galore.
- Expression language baked into its guts.
- REST support, pretty much an extension of the new Spring MVC.
- Legacy has been ripped out. (Bye bye TaskExecutor. Welcome java.util.concurrent.)
- Signature of BeanFactory has changed: getBean now accepts both a name and a type token.
- EL supports bean properties, method invocations and construction of value objects. (I’m not sure I really enjoy having support for method invocations and construction of value objects in the expression language, but I definitely do enjoy not having to use PropertyPlaceholderConfigurer anymore. See the sketch after this list.)
- REST support introduces a couple of new annotations, but also seems to allow you to register your own annotations. (I want to look at the abstractions they have for dealing with that, to see if there’s any correspondence with the Preon way of working.)
- Rod also talks about Spring’s Java configuration. I can’t help it, but this stuff just gives me a nosebleed.
- He ends his talk by mentioning Spring Roo, which apparently aims to bring Ruby on Rails-style productivity to Spring. I am a bit skeptical.
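A small sketch of the typed getBean and the expression language; the bean and property names (beans.xml, databaseConfig, dataSource) are made up for the occasion:

import javax.sql.DataSource;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;
import org.springframework.stereotype.Component;

@Component
public class ConnectionSettings {

    // EL in action: a property of another bean, and a system property.
    @Value("#{databaseConfig.url}")
    private String url;

    @Value("#{systemProperties['user.name']}")
    private String user;

    public static void main(String[] args) {
        ApplicationContext context = new ClassPathXmlApplicationContext("beans.xml");
        // The new typed getBean: no more casting the result.
        DataSource dataSource = context.getBean("dataSource", DataSource.class);
        System.out.println(dataSource);
    }
}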
Collections Connection
- Always fun. Hmmmmm. The notes are a little sketchy on this topic. Well, at least this is coming, at some point in time:
List<String> list = ["a", "b", "c"];
list[0] = "d";
Map<Integer, String> map = [123:"Foo", 321:"Bar"];
map[123] = "Foobar";
- Apart from that, we will get a new sort mechanism, taken from Python: something referred to as a “stable, adaptive, iterative, merge-sort”. The important message here is that it is pretty fast.
- And then, it turns out there are some problems with the implementation of ConcurrentLinkedQueue.
- There is also a lot of talk about Google Collections. (Not surprising, considering that two out of three speakers are from Google.) It provides actual implementations of unmodifiable collections, instead of wrapper classes. And then there is a lot of talk about MapMaker: a builder for building exactly the type of map you need. At first it seems it would also make a great building block for caches, but the authors point out that caching requires additional constructs, for instance cache eviction strategies. (See the sketch after this list.)
- The talk also mentions two other libraries I never bothered taking a look at before: Fastutil and Trove.
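From memory, so take the exact builder calls with a grain of salt, the Google Collections flavor is roughly this:

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentMap;

import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableMap;
import com.google.common.collect.MapMaker;

public class GoogleCollectionsSketch {
    public static void main(String[] args) {
        // Real immutable implementations, not wrappers around a mutable collection.
        List<String> letters = ImmutableList.of("a", "b", "c");
        Map<Integer, String> codes = ImmutableMap.of(123, "Foo", 321, "Bar");

        // MapMaker: build exactly the map you need; here a concurrent map with
        // soft values, so entries can be reclaimed under memory pressure.
        // (Close to a cache, but as the speakers said: real caching needs more,
        // like explicit eviction strategies.)
        ConcurrentMap<String, byte[]> nearlyACache = new MapMaker()
                .softValues()
                .makeMap();

        nearlyACache.put("icon", new byte[1024]);
        System.out.println(letters + " " + codes + " " + nearlyACache.keySet());
    }
}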
Concurrency Models
Jonas Bonér gave an interesting talk on different concurrency models: STM, the actor model and data flow concurrency. The most important observation is that they all exist for a purpose; you need different models in different circumstances. And Jonas has code examples for each of those circumstances. For STM it’s Clojure based; for the actor model it’s Scala based (he mentions Kilim as probably the best Java implementation); and he names Oz & Alice as examples of programming languages supporting data flow concurrency. He then mentions he did a data flow concurrency implementation in Scala himself.
Bob Lee on References
Still need to find the slides of this talk. Very thorough talk on references. The slides are interesting, since they clearly illustrate the impact of GC on references. (Basically, he walks through collections and shows the impact on references as the GC proceeds. Very nice.)
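Until I track down the slides, here is a minimal refresher on the reference types he walked through. Note that System.gc() is only a hint, so the output below is not guaranteed; that unpredictability is exactly why step-by-step GC illustrations are so useful:

import java.lang.ref.PhantomReference;
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.SoftReference;
import java.lang.ref.WeakReference;

public class ReferenceSketch {
    public static void main(String[] args) throws InterruptedException {
        ReferenceQueue<Object> queue = new ReferenceQueue<Object>();

        Object referent = new Object();
        WeakReference<Object> weak = new WeakReference<Object>(referent, queue);
        SoftReference<Object> soft = new SoftReference<Object>(new Object());
        PhantomReference<Object> phantom = new PhantomReference<Object>(new Object(), queue);

        // A phantom reference never hands back its referent.
        System.out.println("phantom.get() = " + phantom.get()); // always null

        referent = null; // drop the last strong reference
        System.gc();     // a hint, nothing more

        // After a collection, weakly reachable objects are cleared and enqueued;
        // softly reachable ones are only cleared under memory pressure.
        System.out.println("weak.get() = " + weak.get());
        System.out.println("soft.get() = " + soft.get());

        Reference<?> enqueued = queue.remove(1000); // may time out if no GC happened
        System.out.println("enqueued = " + enqueued);
    }
}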
JPicus
It made me wonder why I hadn’t heard of this before. Pretty cool for analyzing what’s happening in terms of I/O. Again, my notes are pretty sketchy here, but this is something you definitely need to give a try. I did however write down one quote:
“If I want to read something nice, I sit and write it myself!”
Mark Twain
I always loved Mark Twain, but this is going to be my motto for 2010.
This Is Not Your Father’s Von Neumann Machine
Done by Brian Goetz and Cliff Click. Packed with assembly code. I think they could easily have talked about this for an entire afternoon. Strangely satisfying for a talk that continuously set off the information-overload alert in my head. I think the key message was: if you want to use modern architectures efficiently, make sure you understand how your code affects the CPU caches. Locality matters! Luckily, there is a 114-page document that summarizes the talk. 😉
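To see the locality point in code, a naive micro-benchmark (no proper harness, so the numbers are illustrative only): it sums the same matrix twice, once walking it row by row, once column by column. Same number of additions, wildly different cache behavior:

public class LocalityDemo {

    static final int N = 2048;
    static final int[][] MATRIX = new int[N][N];

    public static void main(String[] args) {
        System.out.println("row-major:    " + time(true) + " ms");
        System.out.println("column-major: " + time(false) + " ms");
    }

    static long time(boolean byRow) {
        long start = System.nanoTime();
        long sum = 0;
        for (int i = 0; i < N; i++) {
            for (int j = 0; j < N; j++) {
                // Row-major walks memory sequentially; column-major jumps
                // to a different row array on every single step.
                sum += byRow ? MATRIX[i][j] : MATRIX[j][i];
            }
        }
        if (sum == 42) {
            // Use the result, so the summing loop is not dead code.
            System.out.println("unexpected checksum");
        }
        return (System.nanoTime() - start) / 1000000;
    }
}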