Archive

Archive for the ‘java’ Category

JSplitPainInTheAss: A less abominable fix for setDividerLocation

June 12th, 2011 9 comments

Update: Some discussion of this idea and code updates are in the gist here. Hubris.

If you use Swing much there will come a time when you’d like to create a splitter, aka JSplitPane. Overall, it works fine, but it has one lame, lame, lame feature: if you want to set the divider’s initial position proportionally, you’re screwed. I say feature, because the documentation for JSplitPane.setDividerLocation clearly states that it won’t work if the component hasn’t been “displayed” yet.

I bet that 95% of the time the programmer using this function is calling it at start-up before anything is displayed. Way to cover the 5% case there.

Anyway, you’ll go to Google and try to find a solution to this problem and you’ll find a lot of discussion and many variations on the same workaround. But they inevitably rely on sub-classing (isn’t that the solution to everything in Swing?) and hacking in some logic to fix things right before the splitter gets painted the first time. Nasty.

I’ve been thinking about this in terms of Seesaw, a Clojure/Swing API I’ve been working on. I want setDividerLocation to cover the 95% case and I don’t want to sub-class anything if I can help it. It’s not composable. So I thought about it a bit and came up with a much more elegant solution that works on a vanilla JSplitPane. Since I think this will be much more interesting to the Java Swing developers toiling away out there, I’ve coded it in pure Java. Here it is, a simple function:

If you can’t see any code, look here.

By the way, this code is insanely shorter in Clojure. Just sayin’.

The basic idea was to keep putting off the call to setDividerLocation using invokeLater until the splitter was realized on the screen. No fuss, no muss. … strike that. The invokeLater approach did indeed work, but under certain circumstances, a split pane that wasn’t displayed soon enough would result in an infinite cascade of invokeLater events. The main problem with this is that the processor never gets to sleep, etc, etc. So I came up with an alternate approach that takes advantage of a hierarchy-changed event (via a HierarchyListener) to detect when the split pane is displayed (learn something new every day), optionally followed by a resize listener, because the hierarchy event may arrive before the split pane has been laid out by its parent. Swing’s exhausting.
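In case the embedded code above isn’t visible, here’s a rough sketch of that final listener-based approach in plain Java. It’s not the exact gist code, just the idea, and it assumes it’s called on the event dispatch thread like everything else in Swing:

import java.awt.event.ComponentAdapter;
import java.awt.event.ComponentEvent;
import java.awt.event.HierarchyEvent;
import java.awt.event.HierarchyListener;
import javax.swing.JSplitPane;

public final class SplitPaneUtil
{
    // Defer the proportional setDividerLocation call until the split pane is
    // actually showing and laid out, using listeners instead of invokeLater polling.
    public static void setDividerLocationWhenShown(final JSplitPane split, final double proportion)
    {
        if (split.isShowing() && split.getWidth() > 0 && split.getHeight() > 0)
        {
            split.setDividerLocation(proportion);
            return;
        }
        split.addHierarchyListener(new HierarchyListener()
        {
            public void hierarchyChanged(HierarchyEvent e)
            {
                if ((e.getChangeFlags() & HierarchyEvent.SHOWING_CHANGED) != 0 && split.isShowing())
                {
                    split.removeHierarchyListener(this);
                    if (split.getWidth() > 0 && split.getHeight() > 0)
                    {
                        split.setDividerLocation(proportion);
                    }
                    else
                    {
                        // Showing, but not laid out yet. Wait for the first resize.
                        split.addComponentListener(new ComponentAdapter()
                        {
                            public void componentResized(ComponentEvent e2)
                            {
                                split.removeComponentListener(this);
                                split.setDividerLocation(proportion);
                            }
                        });
                    }
                }
            }
        });
    }
}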

Cheers!

Categories: java, swing

Java Concurrency Pitfalls (in Scala) Answers

May 10th, 2011 3 comments

Recently, Cay Horstmann posted a dozen Java concurrency pitfalls ported to Scala for extra pitfalliness. I had fun figuring them out. Here’s what I came up with.

I won’t copy all the code here, just provide my answers.

  1. The var stop isn’t protected in any way (volatile, synchronized, AtomicBoolean, etc.), so there’s a chance that when it’s changed, the change won’t be seen by the other thread.
  2. Here, stop is an AtomicBoolean, fixing the issue in problem #1. That’s good. However, if doSomething() throws an exception done.put("DONE") will never execute and done.take() will wait forever.
  3. Here, Thread.run() is called rather than Thread.start(), so the so-called “BackgroundTask” will actually run synchronously in the “foreground”, on the calling thread.
  4. ConcurrentHashMap.keySet().toArray will give a “weakly consistent” snapshot of the keyset at the time of the call. By the time it returns, the keys in the map may have changed completely.
  5. Oops. A string literal is used for the lock meaning that all instances of Stack will most likely share the same lock.
  6. Oops. A new “lock” is created every time push() is called, which is just as good as no lock.
  7. Here values is mercifully synchronized, but the var size isn’t. That is, if size is used in any other methods it may not be synchronized correctly.
  8. A do/while is used for the condition variable rather than just while. If cond is in a signaled state initially, the condition that size is zero may not be checked.
  9. If out.close() throws an exception, the myLock.unlock() will not be executed.
  10. Here, a string is being added to a blocking queue within a UI event handler. If the queue is full, queue.put() will block causing the UI to become unresponsive.
  11. I’m not totally sure with this one. I can think of a few things that could go wrong. First, if a listener is added or removed while fireListeners() is executing, you could get a ConcurrentModificationException. I think so anyway. I wasn’t able to find any indication of how ArrayBuffer iteration handles this. Second, since the listeners are notified with the lock held, you’ll almost certainly get a deadlock eventually, usually a lock inversion with some other thread that’s calling SwingUtilities.invokeAndWait().
  12. Ridiculously, SimpleDateFormat is NOT THREADSAFE! So using the same formatter from multiple threads is a recipe for sadness. (A common workaround is sketched just after this list.)
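For #12, a common workaround (sketched here from memory, not taken from Horstmann’s post) is to give each thread its own formatter via ThreadLocal:

import java.text.DateFormat;
import java.text.SimpleDateFormat;
import java.util.Date;

public class DateFormats
{
    // One SimpleDateFormat per thread, since instances aren't thread-safe.
    private static final ThreadLocal<DateFormat> FORMAT = new ThreadLocal<DateFormat>()
    {
        protected DateFormat initialValue()
        {
            return new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        }
    };

    public static String format(Date date)
    {
        return FORMAT.get().format(date);
    }
}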

I wonder what I missed.

Cheers!

Ten Little Soul Crushing Features of Java

April 8th, 2009 6 comments

There are a lot of big things (lack of closures, type inference, etc, etc) to dislike about Java. This is a list, in no particular order, of little things that make day-to-day Java development just that much more irritating. Most of these are just convenience methods whose omission is unforgivable. Libraries exist to address all of them, but that’s beside the point.

1) No string join method

How many times have I written this?
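Something like this, every single time (a rough sketch of the usual helper, not the exact snippet that originally appeared here):

public static String join(Iterable<?> parts, String separator)
{
    StringBuilder sb = new StringBuilder();
    boolean first = true;
    for (Object part : parts)
    {
        if (!first) sb.append(separator);
        sb.append(part);
        first = false;
    }
    return sb.toString();
}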

2) No way to set additional JVM parameters in manifest of executable jar

It’s really convenient to just double-click a jar … until you need to set your heap size or something and you have to go crawling back to a shell script.

3) java.net.URL constructor throws checked MalformedURLException rather than unchecked IllegalArgumentException

Seriously, what is so special about this exception?

4) java.io.File has no getExtension() method

Same as join() above.

5) No immutable collections.  Collections.unmodifiableList() and friends don’t count

6) java.io.File.delete() silently does nothing if you try to delete a non-empty directory

It’s even more irritating to write deleteFolder() than join() and getExtension().
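For reference, the deleteFolder() in question usually ends up looking roughly like this (a sketch, not code from the original post):

public static boolean deleteFolder(java.io.File dir)
{
    java.io.File[] children = dir.listFiles();
    if (children != null)
    {
        for (java.io.File child : children)
        {
            deleteFolder(child);
        }
    }
    return dir.delete();
}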

7) java.util.Random has a setSeed() method, but no getSeed() method

8) java.util.logging is built-in and incredibly lame

They sucked every ounce of the joy out of log4j.

9) JTree selection behavior gives me a headache every time I have to deal with it

10) No method to just read an entire InputStream or Reader into a byte array or string

Same as join() above.
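And the read-it-all boilerplate, sketched out (again, not the original snippet):

public static byte[] readFully(java.io.InputStream in) throws java.io.IOException
{
    java.io.ByteArrayOutputStream out = new java.io.ByteArrayOutputStream();
    byte[] buffer = new byte[8192];
    int n;
    while ((n = in.read(buffer)) != -1)
    {
        out.write(buffer, 0, n);
    }
    return out.toByteArray();
}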

Bonus) No method to copy a file!


Categories: java

Importing Multiple WSDLs with Maven

March 11th, 2009 22 comments

The jaxws-maven-plugin for Maven includes the handy wsimport goal. This goal will take a WSDL from a URL or file and generate Java bindings for the service described. The generated code may not be beautiful, but it works. Anyway, today I spent more than an hour fiddling with wsimport. In my case, I was trying to import multiple WSDLs into my project with different target packages. I ran into several hurdles and figured I’d document them here for future victims.

First, for better or worse, I’m working in NetBeans.  The Maven support is passable, but out of the box, the error reporting leaves a lot to be desired. In particular, detailed error reporting must be enabled. Otherwise, when there’s an error in your pom.xml file, you’ll get a nice <Badly formed Maven project> error with no other explanation. To enable error reporting go to Tools->Options->Miscellaneous->Maven, and check “Produce Exception Error Messages”. That will make your life easier. Now, about wsimport…

The obvious way to import multiple WSDLs is to include multiple executions in the jaxws-maven-plugin section of pom.xml. In fact, this is even the right way to do it… but if you just take your single-WSDL example, stolen from some tutorial somewhere, and copy it, you’ll end up with a couple of problems. First, when multiple executions are present, they must each be given a unique id, using, of all things, the <id> tag. This wasn’t that tough to figure out once I turned on error reporting as described above.

The second issue was more problematic and, in my opinion, probably a bug in the plugin. Each execution includes a “staleFile” which is used to manage dependencies, i.e. correctly recompiling WSDL when it changes. However, it happens that when multiple executions are present they use the same staleFile.  This means that the import of the second WSDL always thinks it’s up to date and thus never runs.  After a bunch of googling, I managed to find a solution in this bug report. So, the solution is to manually set a staleFile for each WSDL. Here’s the resulting <plugin> block from pom.xml:

<plugin>
   <groupId>org.codehaus.mojo</groupId>
   <artifactId>jaxws-maven-plugin</artifactId>
   <executions>
      <execution>
         <id>FirstWsdl</id>
         <goals>
            <goal>wsimport</goal>
         </goals>
         <configuration>
            <wsdlLocation>http://localhost:8080/FirstWsdl?wsdl</wsdlLocation>
            <wsdlFiles>
               <wsdlFile>path/to/FirstWsdl.wsdl</wsdlFile>
            </wsdlFiles>
            <packageName>com.example.first</packageName>
            <!-- Without this, multiple WSDLs won't be processed :( -->
            <staleFile>${project.build.directory}/jaxws/stale/wsdl.FirstWsdl.done</staleFile>
         </configuration>
      </execution>
      <execution>
         <id>SecondWsdl</id>
         <goals>
            <goal>wsimport</goal>
         </goals>
         <configuration>
            <wsdlLocation>http://localhost:8080/SecondWsdl?wsdl</wsdlLocation>
            <wsdlFiles>
               <wsdlFile>path/to/SecondWsdl.wsdl</wsdlFile>
            </wsdlFiles>
            <packageName>com.example.second</packageName>
            <!-- Without this, multiple WSDLs won't be processed :( -->
            <staleFile>${project.build.directory}/jaxws/stale/wsdl.SecondWsdl.done</staleFile>
         </configuration>
      </execution>
   </executions>
</plugin>

That’s it. Cheers.

Categories: java

Porting Soar to Java or: How I Learned to Stop Worrying and Love Spaghetti (Part 2)

December 26th, 2008 No comments

In the previous installment of this series, I wrote about some of the challenges of the initial port of the Soar cognitive architecture from C/C++ to Java. As I noted then, the approach I chose was bottom-up with minimal refactoring. With a couple months of work, I converted about 40k lines of C++ code to about 40k lines of Java code.

Actually, the overhead of stronger typing and the lack of macros and unions made the Java implementation a bit larger in terms of lines of code. I think the ability to reliably browse the code in Eclipse more than made up for the bloat.

Moving Spaghetti Around The Plate

The original Soar code base is an amalgam of different programming styles reflective of its history as a university research system. There are hints of object orientation as well as functional aspects (it was originally implemented in Lisp, of course), but for the most part it’s good old procedural code. Open data structures with various free functions performing operations on them. The code base itself is broken up into compilation units along mostly functional lines. There’s decide.cpp, which deals mostly with the decision process: substates, impasses, the goal dependency set, etc. There’s symtab.cpp which deals for the most part with allocating and wrangling Soar symbol structures. And on and on…

Of course, you need an object to kind of tie all these pieces together. In the case of Soar, there is the agent struct, aka One Struct To Rule Them All. The agent struct lives in agent.h of all places and is 639 lines of deliciously public members. Here’s a taste:

typedef struct agent_struct {
  /* After v8.6.1, all conditional compilations were removed
   * from struct definitions, including the agent struct below
   */

  /* ----------------------- Rete stuff -------------------------- */
  /*
   * These are used for statistics in rete.cpp.  They were originally
   * global variables, but in the deglobalization effort, they were moved
   * to the (this) agent structure.
   */
  unsigned long actual[256], if_no_merging[256], if_no_sharing[256];

  unsigned long current_retesave_amindex;
  unsigned long reteload_num_ams;
  alpha_mem **reteload_am_table;

  // ... #### 615 lines omitted for sake of brevity #### ...
  // JRV: Added to support XML management inside Soar
  // These handles should not be used directly, see xml.h
  xml_handle xml_destination;		// The current destination for all XML generation, essentially either == to xml_trace or xml_commands
  xml_handle xml_trace;				// During a run, xml_destination will be set to this pointer.
  xml_handle xml_commands;			// During commands, xml_destination will be set to this pointer.

} agent;
/*************** end of agent struct *****/

It’s a beast and it’s passed to just about every function in the system just in case that function may need access to just about anything.

In the interests of sanity, I took a fairly naive approach to the port. For each compilation unit (cpp file) I:

  • Created a Java class
  • Created a Java method for each function in the cpp file
  • Created Java member variables for each member of the old agent structure that seemed to be accessed more or less exclusively by that module

This approach gave me the warm and fuzzy feeling that I was breaking up that awful agent struct and making the system more modular. All my dreams of refactoring the spaghetti of the Soar kernel into a highly modular, easily extended and tested system were coming true…

Ok, maybe not. As I mentioned above, the kernel was only broken up across cpp files along functional lines. This meant that any member variable that I chose to move from the agent structure to the Java class corresponding to the cpp file still had to be public because it was likely that several other modules accessed it however they wanted.

I had taken a 10 Lbs wad of spaghetti and delicately teased it into 10 or so 1 Lbs wads. Each of these spaghetti-lets still maintained an array of strands connecting it to most of its siblings. I think a diagram is in order.

Here’s what I started with, 10 Lbs of spaghetti:

10 Lbs of Spaghetti

And here’s what I ended with, 10 little 1 Lbs spaghetti monster babies:

1 Lbs Spaghetti Babies, 10 of them

See what I mean? I’m really no closer to object orientation, encapsulation or anything. And, of course, the punchline is that I need a top-level object to stitch all these babies together. Can you guess what it’s called?

So, I have an Agent class. It contains a bunch of “module” objects which are all intertwined with each other and have to be public so that everyone can get at each other’s parts.  I’m pretty sure there’s a code smell here, but I can’t quite put my finger on it…

I actually have two goals here. First, I want to build a public interface for jsoar that is clean and clear and suitable for integrating intelligent Soar agents into cool systems. Second, I want an agent that’s nicely modularized and encapsulated so that the rete can be used (and tested!) on its own, etc. Of course, I don’t want to over-encapsulate either. Soar is first and foremost a research system which, in my opinion, means that encapsulation can often get in the way of getting things done.

For the first goal, a clean interface, I want the Agent class to be straightforward without a bunch of yucky public members or just as yucky public accessors.  I also want an interface that will allow me to refactor all these modules slowly over time without impacting external clients. Here I’ll describe my current approach to solving these two problems.

Using the Adapter Pattern to Hide Your Spaghetti

First, how do I give access to private members without cluttering up the interface with a bunch of getters? For this problem, I chose to use the adapter pattern, used liberally by the Eclipse framework. The basic idea is an interface like this:

public interface Adaptable
{
    Object getAdapter(Class<?> klass);
}

The getAdapter method takes a class as an argument and returns an instance of that class. Basically, you’re asking the adaptable object to turn itself into something else for you. In the case of the jsoar Agent, this is a great way to give access to internal modules without cluttering up the API. When one module needs access to another internal module, it can just ask for it by class name:

Decider decider = (Decider) agent.getAdapter(Decider.class);

Here Decider is an internal class. If you happen to know the password (Decider.class) you can get access to it. If you’re just a casual client building another demonstration of Missionaries and Cannibals, you’ll never be tempted by that public getDecider() method, because it’s not there. Yay!  This could also be implemented with a map and string keys, but I kind of like the adapter approach for its simplicity and type safety.

I realize I could also introduce an Agent interface where the private implementation has all the accessors and public members you could want. I will probably add such an interface as well, but I still like the approach of accessing this stuff only through the adapter. It also clearly illuminates the numerous dependencies between the internal modules in a way that I think getters would hide. It’s psychological :)
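For concreteness, here’s a minimal sketch of what the Agent side of this might look like. It assumes the Adaptable interface above and uses the internal Decider as a stand-in for any module; the real jsoar code is more elaborate than this:

public class Agent implements Adaptable
{
    private final Decider decider = new Decider();
    // ... other internal modules ...

    // Hand internal modules out by class instead of exposing getters on the public API.
    public Object getAdapter(Class<?> klass)
    {
        if (Decider.class.equals(klass))
        {
            return decider;
        }
        // ... check the other modules the same way ...
        return null;
    }
}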

Hey, I was Eating That! Twiddling Your Secret Spaghetti

Now, there are a lot of places where an external client would like to twiddle the private parts of various internal modules. For example, to change the “wait on state-no-change” setting, client code really needs to be able to access Decider.waitsnc, which is a boolean member variable. Well, it seems like I just cut off that route in the previous section. Besides, I’m not really married to this whole Decider class thing anyway. It’s a monster and should probably be broken up into several smaller objects.  I could just add a getter/setter pair to the top-level Agent class.  There are dozens of these parameters though and I don’t want them cluttering up the interface.

My solution to this is a simple multi-layer property system. It provides type-safety as well as affordances for high-performance parameters that are accessed frequently in inner loops. First we start off with a generic class that describes a single parameter/property, a PropertyKey. It’s basically like this:

class PropertyKey<T>
{
    public String getName();

    public T getDefaultValue();

    // ... etc ...
}

A PropertyKey is an immutable object. Instances are built with a convenient builder interface. They are meant to be instantiated as constants, i.e. static and final. A PropertyKey acts as a key into a map of property values managed by, of all things, a PropertyManager:

class PropertyManager
{
    public <T> T get(PropertyKey<T> key);
    public <T> T set(PropertyKey<T> key, T value);

    // ... etc ...
}

As you can see, this is all nice and typesafe. Now, what if we have a property that’s a flag, like “learning enabled”, that’s checked frequently by internal code? In this case, for performance, we don’t want that inner loop constantly doing a map lookup, not to mention boxing and unboxing of the value. Enter the third interface, PropertyProvider:

public interface PropertyProvider<T>
{
    T get();
    T set(T value);
}

A property provider holds the actual value of the property, rather than the value being stored directly in the property manager. Thus, in the Chunker module, our learning flag can be managed with a simple anonymous inner class:

public class Chunker
{
    // ...
    private boolean learningEnabled;
    private final PropertyProvider<Boolean> learningEnabledProvider = new PropertyProvider<Boolean>() {
        public Boolean get() { return learningEnabled; }
        public Boolean set(Boolean value)
        {
            learningEnabled = value;
            return value;
        }
    };
    // ...
}

Now, high-frequency code can access the learningEnabled member directly (through the getAdapter() back door), while low-frequency client code can access it through the PropertyManager interface. As a bonus, the property provider can do additional bounds checking on parameters and other fancy stuff. Best of all, our Agent interface isn’t faced with an ever growing set of arbitrary accessors. New properties can be added as needed without affecting other code. In fact, they can be added at run-time, if that’s ever necessary.

Oh, there’s more

So. Now I’m at a point where I have a pretty clean public interface for building jsoar-based systems. Beneath this clean API lurks a bunch of baby spaghetti monsters just dying to be refactored. I haven’t quite figured that part out yet, so I’ll have to leave that story for another day.

Categories: java, soar, software engineering

JRuby … Testify!

December 19th, 2008 No comments

Recently on the JRuby mailing list, a call went out for all the crypto-jrubyists to come out of their holes and tell the world about their experience with JRuby. I believe the call was inspired by this blog post by Brian Tatnall. So, I figured I’d throw in my two cents.

Unlike Brian, I am not someone who “works with Rails every day”.  In my job, I do a lot of Java, C++, Tcl, and maybe some Python when I get lucky. I’ve heard a lot of good things about Ruby, but have never had the chance to try it out on a project. Adding yet another language to one of our systems would just be mean.  So I tried Ruby a few times, liked it, and then forgot it.

The Opportunity

Anyway, a few months ago an odd (for my company) project came along that required us to build a small webapp that does custom indexing and searching of a document repository.  I should mention that not only am I a total Ruby noob, but I’m also a total webapp noob… In any case, this seemed like the perfect chance to learn something new and a colleague of mine had been playing around with Rails at home so we at least had that.

Some initial work had already been done using Lucene in Java and after talking with the customer, we learned that they wanted to deploy the app on Tomcat 5.5. Enter JRuby.

The Development

We decided to go with JRuby (version 1.1.3 at the time). Since JRuby allows Ruby code to call into existing Java libraries, I decided to stick with the existing Lucene code in Java. Thus, I built a back-end indexer and query API in Java, while we built the web front-end in JRuby and Rails.

Honestly, it was almost too easy. Rails makes building the webapp sickeningly easy, especially for something as straightforward as what we were doing. Calling the Java-based query interface I had built was also a snap. I just imported my jar and started calling methods. The conversion from Java objects to Ruby objects was mostly seamless. I think I only had to resort to a couple of map calls to turn lists returned by the query interface into objects that Rails could use in its view templates.

I should note that JRuby is actively supported by NetBeans and apparently by Aptana on Eclipse as well. For everything I was doing with Rails, I stuck with vim and the command line. Rails already does way more magic than I understand and an IDE would just add one more layer to my ignorance.

I think the only “hack” I ended up with was a static list of query objects stored in Java and accessed from JRuby. Since the Rails controllers are stateless, I couldn’t figure out where I should store global app data, so for expediency, I put them in a global map. I’m very bad. I chalk this up to total ignorance of Ruby and Rails (more on this below).

The Deployment

We finished the first phase of development way under budget largely thanks to JRuby and Rails. Yay!  This turns out to have been a good thing, because we needed that extra time when we went to deploy. Recall that we were going to deploy to the customer’s Tomcat server.  Luckily, the Warbler tool makes this process mostly painless. Given a Rails app, Warbler packages everything up and generates a war file.

I had a few issues with Warbler. First, there was some inconsistency in what command to actually run. In some documentation, it said to “warble”, when the command was really “warbler”, or vice versa, I don’t remember. Second, it took me a little while to figure out the right method of specifying the gems and Java libraries that my app depended on. I recall spending a day fiddling with Tomcat, trying to make sure my jars were discovered as well as the right JDBC ActiveRecord adapter, etc. I think this again has more to do with my ignorance than any actual problems with Warbler.

So I set up a box that mirrored the customer’s setup and documented the deployment process. Unfortunately, the customer was in another state, which meant the install was actually going to be done by a coworker in that state who knew even less about this stuff than me. It took her a day or two of phone calls clarifying everything I’d missed in my installation guide, but eventually everything worked and we had a working webapp!

I should also mention that more than half of the install time was spent on issues with my Java indexer. JRuby wasn’t really much of a problem.

The Conclusion and p.s. I Don’t Know Ruby

So, the project was really a success. Even with the deployment hassles, we were still under budget, impressed the customer and were able to get follow-on funding to add features. Thanks JRuby!

However, while I was working on the project, I noticed a phenomenon I had read about here. I had built and deployed a Ruby on Rails app and can safely say that I barely know Ruby any better than I did when I started. Rails does so much work for you that you can get by without really knowing any Ruby at all. That’s a little sad, since projects like this are usually the only time I get to really learn a new language in any depth. Maybe I need to start with a pure Ruby app. I’ve been thinking that a lot of the Swing UI code I’ve been doing for the jsoar debugger would be quicker and more educational if it were done in JRuby, or one of the other JVM-based languages that have been multiplying like rabbits lately. We’ll see.

Others

Here are other JRuby testimonials I’ve come across:

Categories: java, jruby

My First Open Source Release

December 12th, 2008 No comments

This week, I released the first version of jsoar, my Java implementation of the Soar kernel. Obviously, the Soar community is quite small, so the release is fairly low-key and low-stress. Still, it’s the first time I’ve really done a release of an open source project of my own. For promotion, I sent out an announcement to the main Soar mailing list. I also gave a brief introduction and demonstration during lunch at my job, a Soar shop. Overall, I think it was well received. Everyone seemed engaged and maybe even excited to give it a try. I even got an unexpected, but very nice, pat on the back.

Probably the most interesting thing to come out of the release though was my choice of version number.  This may have been foolish, but I released jsoar as version 0.0.1.  One of my problems is a fear of overstating or exaggerating something, so I think my reasoning for this decision has something to do with managing expectations.  I don’t want someone to download it and be disappointed because the version number made it seem like more than it was.

That said, I was immediately chastised by some colleagues for choosing such a low version number. In their opinion, 0.0.1 says “I’ve barely finished writing the first module and you’re lucky if the code even compiles”.  I guess they have a point. Now that I think about it, I would think the same thing if I came across an open source project with a single 0.0.1 release.  jsoar is actually fully functional and ready to be used in real Soar projects, at least projects tolerant of a little risk.  Because it’s a direct port, a lot of the code ain’t pretty, but by the same token, it benefits from 20 odd years of debugging and optimization.

As Steve Yegge has pointed out, marketing is actually a pretty important skill for developers. So, I’m learning that lesson again.  Maybe in a couple weeks I’ll put out a new version and call it Soar 10.0, or jsoar 2009. Ok, maybe not that, but I think 0.6.0 is probably a good compromise. I think that says “this system is functional, but don’t be surprised if I change a bunch of stuff on you before the next release”, which is really what I was going for in the first place.

A brief postscript: My release timing seems fortuitous. The next day, another message was posted to soar-group asking if anyone had successfully compiled Soar in 64-bit mode. Sadly, the answer is no owing to the C implementation’s frequent abuse of pointers and other architecture dependent features. jsoar, of course, has none of those problems and 64-bit support was one of my initial selling points of a Java implementation…

Running javadoc Ant task from Eclipse

December 9th, 2008 25 comments

All things being equal, I like projects that build out-of-the-box. That is, given a clean checkout from revision control, a project should just build without requiring too much customization: setting environment variables, installing third-party software, modifying the system path. I’m especially sensitive to this at the moment because I’ve just finished up five days (actually maybe 30 hours altogether) getting one particularly horrible system to build.

Along these lines, I added a javadoc task to an Ant build script today and tried running it from Eclipse. Just for the record, that procedure is as follows:

  • Open build.xml
  • Right-click the task in the Outline View
  • Select Run As->Ant build.

Interestingly enough, this failed with the following error:

build.xml:208: Javadoc failed: java.io.IOException: Cannot run program
"javadoc.exe": CreateProcess error=2, The system cannot find the file
specified

A quick Google search reveals several suggestions that the solution is to make sure that javadoc.exe is on the system path.  First, it’s a little ridiculous that Ant can’t find javadoc from JAVA_HOME when it clearly uses the same mechanism to track down javac. Oh well. Bygones. Second, returning to the idea of builds that “just work”, I don’t want to modify my system path. What if I have several JDKs installed, used with several different projects simultaneously?

So, how do we get javadoc onto the system path without modifying it? Simple, modify the path in Eclipse. This time, run the Ant task with the following procedure:

  • Open build.xml
  • Right-click the task in the Outline View
  • Select Run As->Ant Build …

That ellipsis at the end is important. This will bring up the Eclipse launch configuration dialog. Give your new launch configuration a name, like “Build <Project Name>” or something, and switch to the Environment tab. Here you can specify the environment for Ant. But we don’t want to kill the whole system path, just prepend the location of javadoc.exe to it. So click New… and enter Path for the name and the following for the value:

   ${env_var:JAVA_HOME}/bin;${env_var:Path}

This prepends JAVA_HOME/bin to the current system path. Now click Run and everything should work fine. Yay.

Now, when someone else checks out the project, you don’t want them to have to go through the same hassle. It’s still a hassle, just inside Eclipse instead of somewhere else on the machine. The solution to this problem is to save the launch configuration! Return to the launch configuration screen and open the Common tab. There you can select to save the configuration as a shared file. I usually save it in tools/launches. The resulting file will have a .launch extension. Commit the launch file to version control. Now anyone who checks out the project will have a properly configured launch configuration to build the project. No fuss, no muss.

Also note that this is a much more general purpose solution. It applies to any launch configuration where you need to modify the path, or set any kind of environment variables.

Possible Issues

There are a few potential issues I can think of:

  • That semi-colon in the path string may not work on non-Windows systems. I’m not sure if Eclipse is smart enough to fix that.
  • In the past, I’ve had trouble with the case of environment variables and Ant.

Also, I believe that an alternate solution to this problem is to register the JDK in Eclipse. This is ok, but it’s nice to not require it.

Categories: eclipse, java

Beware of Case-Sensitive Environment Variables in ANT

December 2nd, 2008 No comments

Today I was bitten by a kind of annoying feature of ANT. Sadly, a project I’m working on relies on a SWIG-generated wrapper for a C++ library, in particular the SML client libraries for Soar. So, it’s important to have all the DLLs in the right spot to avoid link errors. The usual, and I think most straightforward, solution is to ensure that the DLLs are on the system path. To this end, I can make a simple bat script (or shell script) that sets up the environment and then invokes Java.

However, I try to be good and write unit tests for this stuff too which means that the environment has to be set up correctly for JUnit too, even when invoked from ANT. So, in ANT, how do I extend the system path for the JUnit task while not clobbering the path that’s already there? I did a little hunting around and found the nice <env> tag which is documented in the exec task. Lo and behold, the second example is exactly what I want to do. Here is what I did, translated for junit:

<property environment="env"/>
<junit fork="true" ... >
  <env key="PATH" path="${env.PATH}:${basedir}/vendor/soar-8.6.3/bin"/>
</junit>

Pretty straightforward, right? But, of course, it didn’t work right away. After a little more hunting, I figured out the problem was with that <property environment … > tag. It turns out there are a few things going on here:

  • On Windows, environment variable names are case insensitive
  • Properties in ANT are case sensitive
  • The case of the PATH environment variable in Windows appears to be totally random

These factors conspired against me. In my case, my path environment variable was spelled “Path” with a capital Pee. Changing the ANT file to use this spelling fixed everything. And yes, this limitation is documented in the documentation for the property task.

The really sad thing is that I got bit by this less than an hour later when I set up the project on Hudson (really cool by the way). In this case, it appears that Hudson (or something else) changes the PATH environment variable to all lower case when invoking ANT. So, now I have a build script that runs great on my machine, but fails on the build machine. I’m sure there’s a better solution, but for expediency, I went with this monstrosity:

<property environment="env"/>
<junit fork="true" ... >
  <env key="PATH" path="${env.path}:${env.Path}:${basedir}/vendor/soar-8.6.3/bin"/>
</junit>

I wonder if this will ever come back to haunt me… Yet another reason for me to eliminate this C++ dependency.

Categories: java, software engineering

Use enum to define JTable columns

November 26th, 2008 No comments

Last week while tediously defining another Swing TableModel, I had a little epiphany. Typically, I’d define column headers, types, etc with a list of integer constants, and some arrays:

public class MyTableModel extends AbstractTableModel
{
    private final int NAME_COLUMN = 0;
    private final int VALUE_COLUMN = 1;
 
    private final String NAMES[] = { "Name", "Value" };
    private final Class CLASSES[] = { String.class, Double.class };

    . . .

   public String getColumnName(int columnIndex)
   {
      return NAMES[columnIndex];
   }

   . . .
}

This code is pretty tedious to maintain. In particular, switching column order involves a bunch of changes that are easy to get wrong. How about this instead… use an enum to define the columns!!

public class MyTableModel extends AbstractTableModel
{
    private static enum Columns
    {
        Name(String.class), Value(Double.class);

        final Class klass;

        Columns(Class klass)
        {
             this.klass = klass;
        }
    }

    . . .

   public int getColumnCount() { return Columns.values().length; }

   public Class getColumnClass(int columnIndex)
   {
      return Columns.values()[columnIndex].klass;
   }

   public String getColumnName(int columnIndex)
   {
      return Columns.values()[columnIndex].toString();
   }

   . . .
}

Now rearranging column order just works. Furthermore, you can add whatever column-specific functionality you like as methods on the enum. I think this approach can be generified with an interface for the enum to implement and a new abstract table model base class that can handle all the boilerplate above (getColumnCount(), getColumnName(), etc). When I get around to trying it out, I’ll post an update.
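Roughly what I have in mind, sketched before actually trying it (the interface and class names here are just placeholders, not from any existing library):

import javax.swing.table.AbstractTableModel;

// The interface each column enum would implement.
interface ColumnSpec<R>
{
    Class<?> columnClass();
    Object getValue(R row);
}

// A base model that handles the boilerplate; subclasses supply the rows.
abstract class EnumTableModel<R, C extends Enum<C> & ColumnSpec<R>> extends AbstractTableModel
{
    private final C[] columns;

    protected EnumTableModel(Class<C> columnEnum)
    {
        this.columns = columnEnum.getEnumConstants();
    }

    protected abstract R getRow(int rowIndex);

    public int getColumnCount() { return columns.length; }

    public String getColumnName(int columnIndex) { return columns[columnIndex].toString(); }

    public Class<?> getColumnClass(int columnIndex) { return columns[columnIndex].columnClass(); }

    public Object getValueAt(int rowIndex, int columnIndex)
    {
        return columns[columnIndex].getValue(getRow(rowIndex));
    }
}

A concrete model would then just define its column enum (implementing ColumnSpec) and provide getRowCount() and getRow().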

Late breaking: Of course, a quick search reveals I’m not the first person to think of this. Typical.

Categories: java, swing