Archive for December, 2008

Porting Soar to Java or: How I Learned to Stop Worrying and Love Spaghetti (Part 2)

December 26th, 2008 No comments

In the previous installment of this series, I wrote about some of the challenges of the initial port of the Soar cognitive architecture from C/C++ to Java. As I noted then, the approach I chose was bottom-up with minimal refactoring. With a couple months of work, I converted about 40k lines of C++ code to about 40k lines of Java code.

Actually, the overhead of stronger typing and the lack of macros and unions made the Java implementation generally a bit larger in terms of lines of code. I think the ability to reliably browse the code in Eclipse more than made up for the bloat.

Moving Spaghetti Around The Plate

The original Soar code base is an amalgam of different programming styles reflective of its history as a university research system. There are hints of object orientation as well as functional aspects (it was originally implemented in Lisp, of course), but for the most part it’s good old procedural code. Open data structures with various free functions performing operations on them. The code base itself is broken up into compilation units along mostly functional lines. There’s decide.cpp, which deals mostly with the decision process: substates, impasses, the goal dependency set, etc. There’s symtab.cpp which deals for the most part with allocating and wrangling Soar symbol structures. And on and on…

Of course, you need an object to kind of tie all these pieces together. In the case of Soar, there is the agent struct, aka One Struct To Rule Them All. The agent struct lives in agent.h of all places and is 639 lines of deliciously public members. Here’s a taste:

typedef struct agent_struct {
  /* After v8.6.1, all conditional compilations were removed
   * from struct definitions, including the agent struct below */

  /* ----------------------- Rete stuff -------------------------- */
  /* These are used for statistics in rete.cpp.  They were originally
   * global variables, but in the deglobalization effort, they were moved
   * to the (this) agent structure. */
  unsigned long actual[256], if_no_merging[256], if_no_sharing[256];

  unsigned long current_retesave_amindex;
  unsigned long reteload_num_ams;
  alpha_mem **reteload_am_table;

  // ... #### 615 lines omitted for sake of brevity #### ...
  // JRV: Added to support XML management inside Soar
  // These handles should not be used directly, see xml.h
  xml_handle xml_destination;		// The current destination for all XML generation, essentially either == to xml_trace or xml_commands
  xml_handle xml_trace;				// During a run, xml_destination will be set to this pointer.
  xml_handle xml_commands;			// During commands, xml_destination will be set to this pointer.

} agent;
/*************** end of agent struct *****/

It’s a beast and it’s passed to just about every function in the system just in case that function may need access to just about anything.

In the interests of sanity, I took a fairly naive approach to the port. For each compilation unit (cpp file) I:

  • Created a Java class
  • Created a Java method for each function in the cpp file
  • Created Java member variables for each member of the old agent structure that seemed to be accessed more or less exclusively by that module

This approach gave me the warm and fuzzy feeling that I was breaking up that awful agent struct and making the system more modular. All my dreams of refactoring the spaghetti of the Soar kernel into a highly modular, easily extended and tested system were coming true…

Ok, maybe not. As I mentioned above, the kernel was only broken up across cpp files along functional lines. This meant that any member variable that I chose to move from the agent structure to the Java class corresponding to the cpp file still had to be public because it was likely that several other modules accessed it however they wanted.

I had taken a 10 Lbs wad of spaghetti and delicately teased it into 10 or so 1 Lbs wads. Each of these spaghetti-lets still maintained an array of strands connecting it to most of its siblings. I think a diagram is in order.

Here’s what I started with, 10 Lbs of spaghetti:

10 Lbs of Spaghetti


And here’s what I ended with, 10 little 1 Lbs spaghetti monster babies:

1 Lbs Spaghetti Babies, 10 of them


See what I mean? I’m really no closer to object orientation, encapsulation or anything. And, of course, the punchline is that I need a top-level object to stitch all these babies together. Can you guess what it’s called?

So, I have an Agent class. It contains a bunch of “module” objects which are all intertwined with each other and have to be public so that everyone can get at each other’s parts.  I’m pretty sure there’s a code smell here, but I can’t quite put my finger on it…

I actually have two goals here. First, I want to build a public interface for jsoar that is clean and clear and suitable for integrating intelligent Soar agents into cool systems. Second, I want an agent that's nicely modularized and encapsulated so that the rete can be used (and tested!) on its own, etc. Of course, I don't want to over-encapsulate either. Soar is first and foremost a research system which, in my opinion, means that encapsulation can often get in the way of getting things done.

For the first goal, a clean interface, I want the Agent class to be straightforward without a bunch of yucky public members or just as yucky public accessors.  I also want an interface that will allow me to refactor all these modules slowly over time without impacting external clients. Here I’ll describe my current approach to solving these two problems.

Using the Adapter Pattern to Hide Your Spaghetti

First, how do I give access to private members without cluttering up the interface with a bunch of getters? For this problem, I chose to use the adapter pattern, used liberally by the Eclipse framework. The basic idea is an interface like this:

public interface Adaptable {
    Object getAdapter(Class<?> klass);
}

The getAdapter method takes a class as an argument and returns an instance of that class. Basically, you’re asking the adaptable object to turn itself into something else for you. In the case of the jsoar Agent, this is a great way to give access to internal modules without cluttering up the API. When one module needs access to another internal module, it can just ask for it by class name:

Decider decider = (Decider) agent.getAdapter(Decider.class);

Here Decider is an internal class. If you happen to know the password (Decider.class) you can get access to it. If you’re just a casual client building another demonstration of Missionaries and Cannibals, you’ll never be tempted by that public getDecider() method, because it’s not there. Yay!  This could also be implemented with a map and string keys, but I kind of like the adapter approach for its simplicity and type safety.

I realize I could also introduce an Agent interface where the private implementation has all the accessors and public members you could want. I will probably add such an interface as well, but I still like the approach of accessing this stuff only through the adapter. It also clearly illuminates the numerous dependencies between the internal modules in a way that I think getters would hide. It's psychological :)
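To make the idiom concrete, here's a minimal, runnable sketch. Agent and Decider here are simplified stand-ins of my own, not jsoar's actual classes:

```java
// Simplified stand-ins illustrating the adapter idiom described above.
interface Adaptable {
    Object getAdapter(Class<?> klass);
}

class Decider {
    boolean waitsnc = false; // the kind of internal state clients sometimes need
}

class Agent implements Adaptable {
    private final Decider decider = new Decider();

    // No public getDecider(); code that knows the "password" asks by class.
    public Object getAdapter(Class<?> klass) {
        if (klass == Decider.class) {
            return decider;
        }
        return null; // not adaptable to that type
    }
}

public class AdapterDemo {
    public static void main(String[] args) {
        Agent agent = new Agent();
        Decider decider = (Decider) agent.getAdapter(Decider.class);
        System.out.println(decider != null);                // internal module access works
        System.out.println(agent.getAdapter(String.class)); // casual clients get nothing
    }
}
```

Note that the casual client never sees Decider in the Agent's method list; the coupling only exists in code that explicitly names the class.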

Hey, I was Eating That! Twiddling Your Secret Spaghetti

Now, there are a lot of places where an external client would like to twiddle the private parts of various internal modules. For example, to change the “wait on state-no-change” setting, client code really needs to be able to access Decider.waitsnc, which is a boolean member variable. Well, it seems like I just cut off that route in the previous section. Besides, I’m not really married to this whole Decider class thing anyway. It’s a monster and should probably be broken up into several smaller objects.  I could just add a getter/setter pair to the top-level Agent class.  There are dozens of these parameters though and I don’t want them cluttering up the interface.

My solution to this is a simple multi-layer property system. It provides type safety as well as affordances for high-performance parameters that are accessed frequently in inner loops. First we start off with a generic class that describes a single parameter/property, a PropertyKey. It's basically like this:

class PropertyKey<T> {
    public String getName();

    public T getDefaultValue();

    // ... etc ...
}

A PropertyKey is an immutable object. Instances are built with a convenient builder interface. They are meant to be instantiated as constants, i.e. static and final. A PropertyKey acts as a key into a map of property values managed by, of all things, a PropertyManager:

class PropertyManager {
    public <T> T get(PropertyKey<T> key);
    public <T> T set(PropertyKey<T> key, T value);

    // ... etc ...
}

As you can see, this is all nice and typesafe. Now, what if we have a property that’s a flag, like “learning enabled” that’s checked frequently by internal code. In this case, for performance, we don’t want that inner loop constantly doing a map lookup, not to mention boxing and unboxing of the value. Enter the third interface, PropertyProvider:

public interface PropertyProvider<T> {
    T get();
    T set(T value);
}

A property provider holds the actual value of the property, rather than the value being stored directly in the property manager. Thus, in the Chunker module, our learning flag can be managed with a simple inner class:

public class Chunker {
    // ...
    private boolean learningEnabled;
    private PropertyProvider<Boolean> learningEnabledProvider = new PropertyProvider<Boolean>() {
        public Boolean get() { return learningEnabled; }
        public Boolean set(Boolean value) {
            learningEnabled = value;
            return value;
        }
    };
}

Now, high-frequency code can access the learningEnabled member directly (through the getAdapter() back door), while low-frequency client code can access it through the PropertyManager interface. As a bonus, the property provider can do additional bounds checking on parameters and other fancy stuff. Best of all, our Agent interface isn’t faced with an ever growing set of arbitrary accessors. New properties can be added as needed without affecting other code. In fact, they can be added at run-time, if that’s ever necessary.
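Putting the three pieces together, here's a self-contained sketch of how they might interact. The names mirror the ones above, but the details (no builder, no bounds checking, a plain provider map) are my own simplifications, not jsoar's actual implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for a PropertyKey: immutable, meant to be a constant.
final class PropertyKey<T> {
    final String name;
    final T defaultValue;
    PropertyKey(String name, T defaultValue) {
        this.name = name;
        this.defaultValue = defaultValue;
    }
}

// The provider owns the actual value, so modules can keep a raw field.
interface PropertyProvider<T> {
    T get();
    T set(T value);
}

// Maps keys to providers; falls back to the key's default when unregistered.
final class PropertyManager {
    private final Map<PropertyKey<?>, PropertyProvider<?>> providers =
            new HashMap<PropertyKey<?>, PropertyProvider<?>>();

    <T> void setProvider(PropertyKey<T> key, PropertyProvider<T> provider) {
        providers.put(key, provider);
    }

    @SuppressWarnings("unchecked")
    <T> T get(PropertyKey<T> key) {
        PropertyProvider<T> p = (PropertyProvider<T>) providers.get(key);
        return p != null ? p.get() : key.defaultValue;
    }

    @SuppressWarnings("unchecked")
    <T> T set(PropertyKey<T> key, T value) {
        return ((PropertyProvider<T>) providers.get(key)).set(value);
    }
}

// A module keeps the raw boolean for its inner loops and exposes a provider
// for low-frequency client code.
class Chunker {
    boolean learningEnabled;

    final PropertyProvider<Boolean> learningProvider = new PropertyProvider<Boolean>() {
        public Boolean get() { return learningEnabled; }
        public Boolean set(Boolean value) { learningEnabled = value; return value; }
    };
}

public class PropertyDemo {
    static final PropertyKey<Boolean> LEARNING =
            new PropertyKey<Boolean>("learning", Boolean.FALSE);

    public static void main(String[] args) {
        PropertyManager props = new PropertyManager();
        Chunker chunker = new Chunker();
        props.setProvider(LEARNING, chunker.learningProvider);

        System.out.println(props.get(LEARNING));     // reads through the provider
        props.set(LEARNING, Boolean.TRUE);
        System.out.println(chunker.learningEnabled); // the module's raw field changed
    }
}
```

The point of the indirection is visible in the last two lines: the client talks only to the PropertyManager, but the value it manipulates is the same field the inner loop reads directly.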

Oh, there’s more

So. Now I’m at a point where I have a pretty clean public interface for building jsoar-based systems. Beneath this clean API lurks a bunch of baby spaghetti monsters just dying to be refactored. I haven’t quite firgured that part out yet and so, I’ll have to leave that story for another day.

Categories: java, soar, software engineering Tags: , ,

JRuby … Testify!

December 19th, 2008 No comments

Recently on the JRuby mailing list, a call went out for all the crypto-jrubyists to come out of their holes and tell the world about their experience with JRuby. I believe the call was inspired by this blog post by Brian Tatnall. So, I figured I’d throw in my two cents.

Unlike Brian, I am not someone who “works with Rails every day”.  In my job, I do a lot of Java, C++, Tcl, and maybe some Python when I get lucky. I’ve heard a lot of good things about Ruby, but have never had the chance to try it out on a project. Adding yet another language to one of our systems would just be mean.  So I tried Ruby a few times, liked it, and then forgot it.

The Opportunity

Anyway, a few months ago an odd (for my company) project came along that required us to build a small webapp that does custom indexing and searching of a document repository.  I should mention that not only am I a total Ruby noob, but I’m also a total webapp noob… In any case, this seemed like the perfect chance to learn something new and a colleague of mine had been playing around with Rails at home so we at least had that.

Some initial work had already been done using Lucene in Java and after talking with the customer, we learned that they wanted to deploy the app on Tomcat 5.5. Enter JRuby.

The Development

We decided to go with JRuby (version 1.1.3 at the time). Since JRuby allows Ruby code to call into existing Java libraries, I decided to stick with the existing Lucene code in Java. Thus, I built a back-end indexer and query API in Java, while we built the web front-end in JRuby and Rails.

Honestly, it was almost too easy. Rails makes building the webapp sickeningly easy, especially for something as straightforward as what we were doing. Calling the Java-based query interface I had built was also a snap. I just imported my jar and started calling methods. The conversion from Java objects to Ruby objects was mostly seamless. I think I only had to resort to a couple of map calls to turn lists returned by the query interface into objects that Rails could use in its view templates.

I should note that JRuby is actively supported by NetBeans and, apparently, by Aptana on Eclipse as well. For everything I was doing with Rails, I stuck with vim and the command line. Rails already does way more magic than I understand and an IDE would just add one more layer to my ignorance.

I think the only “hack” I ended up with was a static list of query objects stored in Java and accessed from JRuby. Since the Rails controllers are stateless, I couldn’t figure out where I should store global app data, so for expediency, I put them in a global map. I’m very bad. I chalk this up to total ignorance of Ruby and Rails (more on this below).

The Deployment

We finished the first phase of development way under budget largely thanks to JRuby and Rails. Yay!  This turns out to have been a good thing, because we needed that extra time when we went to deploy. Recall that we were going to deploy to the customer’s Tomcat server.  Luckily, the Warbler tool makes this process mostly painless. Given a Rails app, Warbler packages everything up and generates a war file.

I had a few issues with Warbler. First, there was some inconsistency in what command to actually run. In some documentation, it said to “warble”, when the command was really “warbler”, or vice versa, I don’t remember. Second, it took me a little while to figure out the right method of specifying the gems and Java libraries that my app depended on. I recall spending a day fiddling with Tomcat, trying to make sure my jars were discovered as well as the right JDBC ActiveRecord adapter, etc. I think this again has more to do with my ignorance than any actual problems with Warbler.

So I set up a box that mirrored the customer's setup and documented the deployment process. Unfortunately, the customer was in another state, which meant the install was actually going to be done by a coworker in that state who knew even less about this stuff than me. It took her a day or two of phone calls clarifying everything I'd missed in my installation guide, but eventually everything worked and we had a working webapp!

I should also mention that more than half of the install time was spent on issues with my Java indexer. JRuby wasn’t really much of a problem.

The Conclusion and p.s. I Don’t Know Ruby

So, the project was really a success. Even with the deployment hassles, we were still under budget, impressed the customer and were able to get follow-on funding to add features. Thanks JRuby!

However, while I was working on the project, I noticed a phenomenon I had read about here. I had built and deployed a Ruby on Rails app and can safely say that I barely know Ruby any better than I did when I started. Rails does so much work for you that you can get by without really knowing any Ruby at all. That's a little sad, since projects like this are usually the only time I get to really learn a new language in any depth. Maybe I need to start with a pure Ruby app. I've been thinking that a lot of the Swing UI code I've been doing for the jsoar debugger would be quicker and more educational if it was done in JRuby, or one of the other JVM-based languages that have been multiplying like rabbits lately. We'll see.


Here are some other JRuby testimonials I've come across:

Categories: java, jruby Tags: , ,

Running Code in Eclipse the Lazy Way

December 15th, 2008 No comments

I’m pretty happy working in Eclipse. Today I’ll share one of my favorite “workhorse” keyboard shortcuts.

So say you’re editing some unit tests (you are writing unit tests, right?) for a particular class and you want to run them. Of course, you’ve set up a launch to run all the tests in your project, but you have so many tests that you don’t want to sit through them all for this one little test. Try this instead:

    alt+shift+X and then T

Yay! The tests in the current file were just run. So let's decode that. “alt+shift+X”. X is for eXecute. Now what is “T” for? Oh yeah, Test. Execute test. That's way faster than right-click, Run As, JUnit Test, etc. Or finding the JUnit view and clicking “run again”.

Now what if you wanted to debug? I can guess that the shortcut will end with a “T”. What does it start with though?

   alt+shift+D and then T

Nice. So “alt+shift+D” means debug.

Ok, now if I’m just editing a plain old Java file with a main() method, i.e. I’m testing by hand (bad boy!) what should I do?

   alt+shift+X and then J

Of course it starts with execute, and then “J” for Java. Similarly, if I want to debug:

   alt+shift+D and then J

Cool. Now do that a few hundred times until it’s in muscle memory and you stop using your mouse like a sucker.

Note that these shortcuts not only work in the active editor, but on selected resources. Select a class in the Package Explorer and try it out.

Bonus Shortcuts

So now you’re starting your debug session without the mouse. Don’t waste your time clicking those step buttons:

  • Step Into – F5
  • Step Into Selection – Ctrl+F5
  • Step Over – F6
  • Step Return – F7
  • Resume – F8

I have to admit these have been harder to get down after so many years in Visual Studio. Way it goes.

p.s. I realize these are kind of obvious, but sometimes I don’t do something unless someone tells me to, which may hold for others as well. :) Also there are actually shorter shortcuts for things like debugger (F11 I think), but I like mnemonics.

Categories: eclipse Tags:

My First Open Source Release

December 12th, 2008 No comments

This week, I released the first version of jsoar, my Java implementation of the Soar kernel.  Obviously, the Soar community is quite small, so the release is fairly low-key and low-stress.  Still, it's the first time I've really done a release of an open source project of my own.  For promotion, I sent out an announcement to the main Soar mailing list.  I also gave a brief introduction and demonstration during lunch at my job, a Soar shop.  Overall, I think it was well received. Everyone seemed engaged and maybe even excited to give it a try.  I even got an unexpected, but very nice, pat on the back.

Probably the most interesting thing to come out of the release though was my choice of version number.  This may have been foolish, but I released jsoar as version 0.0.1.  One of my problems is a fear of overstating or exaggerating something, so I think my reasoning for this decision has something to do with managing expectations.  I don’t want someone to download it and be disappointed because the version number made it seem like more than it was.

That said, I was immediately chastised by some colleagues for choosing such a low version number. In their opinion, 0.0.1 says “I’ve barely finished writing the first module and you’re lucky if the code even compiles”.  I guess they have a point. Now that I think about it, I would think the same thing if I came across an open source project with a single 0.0.1 release.  jsoar is actually fully functional and ready to be used in real Soar projects, at least projects tolerant of a little risk.  Because it’s a direct port, a lot of the code ain’t pretty, but by the same token, it benefits from 20 odd years of debugging and optimization.

As Steve Yegge has pointed out, marketing is actually a pretty important skill for developers. So, I’m learning that lesson again.  Maybe in a couple weeks I’ll put out a new version and call it Soar 10.0, or jsoar 2009. Ok, maybe not that, but I think 0.6.0 is probably a good compromise. I think that says “this system is functional, but don’t be surprised if I change a bunch of stuff on you before the next release”, which is really what I was going for in the first place.

A brief postscript: My release timing seems fortuitous. The next day, another message was posted to soar-group asking if anyone had successfully compiled Soar in 64-bit mode. Sadly, the answer is no owing to the C implementation’s frequent abuse of pointers and other architecture dependent features. jsoar, of course, has none of those problems and 64-bit support was one of my initial selling points of a Java implementation…

Running javadoc Ant task from Eclipse

December 9th, 2008 25 comments

All things being equal, I like projects that build out-of-the-box. That is, given a clean checkout from revision control, a project should just build without requiring too much customization: setting environment variables, installing third party software, modifying the system path. I'm especially sensitive to this at the moment because I've just finished up five days (actually maybe 30 hours altogether) getting one particularly horrible system to build.

Along these lines, I added a javadoc task to an Ant build script today and tried running it from Eclipse. Just for the record, that procedure is as follows:

  • Open build.xml
  • Right-click the task in the Outline View
  • Select Run As->Ant build.

Interestingly enough, this failed with the following error:

build.xml:208: Javadoc failed: Cannot run program
"javadoc.exe": CreateProcess error=2, The system cannot find the file specified

A quick Google search reveals several suggestions that the solution is to make sure that javadoc.exe is on the system path.  First, it’s a little ridiculous that Ant can’t find javadoc from JAVA_HOME when it clearly uses the same mechanism to track down javac. Oh well. Bygones. Second, returning to the idea of builds that “just work”, I don’t want to modify my system path. What if I have several JDKs installed, used with several different projects simultaneously?

So, how do we get javadoc onto the system path without modifying it? Simple, modify the path in Eclipse. This time, run the Ant task with the following procedure:

  • Open build.xml
  • Right-click the task in the Outline View
  • Select Run As->Ant Build …

That ellipsis at the end is important. This will bring up the Eclipse launch configuration dialog. Give your new launch configuration a name, like “Build <Project Name>” or something, and switch to the Environment tab. Here you can specify the environment for Ant. But we don't want to kill the whole system path, just prepend the location of javadoc.exe to it. So click New… and enter Path for the name and the following for the value:

${env_var:JAVA_HOME}\bin;${env_var:PATH}
This prepends JAVA_HOME/bin to the current system path. Now click Run and everything should work fine. Yay.

Now, when someone else checks out the project you don't want them to have to go through the same hassle. It's still a hassle, just inside Eclipse instead of somewhere else on the machine. The solution to this problem is to save the launch configuration!  Return to the launch configuration screen and open the Common tab. There you can select to save the configuration as a shared file. I usually save it in tools/launches. The resulting file will have a .launch extension. Commit the launch file to version control. Now anyone who checks out the project will have a properly configured launch configuration to build the project. No fuss, no muss.

Also note that this is a much more general purpose solution. It applies to any launch configuration where you need to modify the path, or set any kind of environment variables.

Possible Issues

There are a few potential issues I can think of:

  • That semi-colon in the path string may not work on non-Windows systems. I’m not sure if Eclipse is smart enough to fix that.
  • In the past, I’ve had trouble with the case of environment variables and Ant.

Also, I believe that an alternate solution to this problem is to register the JDK in Eclipse. This is ok, but it’s nice to not require it.

Categories: eclipse, java Tags: , ,

Convenient or Minimal API?

December 4th, 2008 1 comment

There is always a tension in API design between convenience and minimality. I’ve found this especially true when defining interfaces.

What’s Right

As Scott Meyers has written, pretty convincingly I think, it is preferable to keep an interface minimal and then provide a library of non-member helper functions that provide common operations. A classic example of this approach is Java’s Collections class. It provides a bunch of useful methods for working with the collections framework while not bloating up the collections interfaces.
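For instance, finding the maximum element isn't a method on List; it lives in Collections as a free function, and works against any collection implementation:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// The List interface stays minimal; max() and sort() are static helpers
// in java.util.Collections, implemented purely in terms of the public API.
public class HelperDemo {
    public static void main(String[] args) {
        List<Integer> xs = Arrays.asList(3, 1, 2);
        System.out.println(Collections.max(xs));
        Collections.sort(xs); // works on any mutable List implementation
        System.out.println(xs);
    }
}
```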

The primary benefit of this approach is improved encapsulation. When you think of encapsulation as minimizing the amount of code affected by a change in the implementation of the class, this makes perfect sense. C++’s std::string (and Java’s for that matter) is the mother of violations of this rule of thumb. It has a ton of methods, many of which can be trivially implemented in terms of other methods. Thus, any change to the internal implementation could require modifications to much more code than necessary. More code changes means greater bug potential.

What’s Feels Good

On the other hand, you want to provide a convenient interface as well. I don’t think it’s unreasonable, especially in the presence of code completion tools, to expect to hit “dot” and get a list of the operations you can perform on that type. That is, without having to know about some other set of utility functions. This is how common functions get reinvented over and over.

It’s interesting that this hang up seems less prominent in the world of dynamic languages. In Ruby, coders have no issues with adding new methods to a class, even at run-time. For statically typed languages, I think that C# might have the right idea with Extension Methods. It’s basically syntactic sugar for static helper methods that makes them look like normal object methods. As usual, Lisp (CLOS, actually) seems to really have it right. ALL methods are “static helpers”, so called generic functions. Note that I realize there’s a major distinction here! Generic functions are polymorphic on the types of their parameters, while a normal static helper is not. But if you tip your head and squint, I think there’s a resemblance.

Why I Worry

Anyway, I’m working mostly in Java, which has proven to be much less sprightly than C# lately. Way it goes. I have to make this decision. It’s particularly annoying to have to make this decision when the API is a Java interface. In this case, every implementer has to implement the convenience method, which means a chance of getting it wrong, or at least subtly changing its semantics, each time the interface is implemented.

The example that got me thinking of all of this is the InputOutput interface from jsoar. It looks something like this:

public interface InputOutput {
    // ...

    Wme addInputWme(Identifier id, Symbol attr, Symbol value);
    Wme removeInputWme(Wme wme);
    Wme updateInputWme(Wme wme, Symbol newValue);

    // ...
}

The tricky part is that updateInputWme method. A Soar WME is immutable, i.e. it can’t be changed after it’s been created. Thus, to update the value of a WME (say, to change the x coordinate of a simulation object), you have to remove the WME and replace it with a new one with the value changed. That’s what updateInputWme does, and thus it’s implemented completely in terms of the public interface:

public Wme updateInputWme(Wme wme, Symbol newValue) {
    removeInputWme(wme);
    return addInputWme(wme.getId(), wme.getAttribute(), newValue);
}

This is such a basic operation, and one that people expect, that I decided not to make it a helper… but I had to think about it for a minute, which is pretty annoying.

When Right Is Wrong

There’s (at least) one case I can think of where you can’t use static helper methods. This is when dealing with concurrency using the private lock idiom. If the helper performs several operations and it would violate the helper’s contract for other code to see the object in the intermediate states created by those operations, the helper must be implemented as a normal method, holding the private lock as necessary.

An example of such a method is ConcurrentHashMap.putIfAbsent() from the Java concurrency library. If it were implemented naively with get() and put(), its thread-safety guarantees would be violated.
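To see why, here's a sketch contrasting a naive get()-then-put() helper (my own illustration, not real library code) with the real atomic method:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Between the get() and the put(), another thread can insert a value, which
// this naive helper would silently overwrite. The real putIfAbsent() does
// the check-and-insert atomically inside the map's own synchronization.
public class PutIfAbsentDemo {
    // Naive, non-atomic version -- broken under concurrent access.
    static <K, V> V naivePutIfAbsent(Map<K, V> map, K key, V value) {
        V existing = map.get(key);
        if (existing == null) {
            map.put(key, value); // another thread may have won the race here!
            return null;
        }
        return existing;
    }

    public static void main(String[] args) {
        ConcurrentHashMap<String, String> map =
                new ConcurrentHashMap<String, String>();
        System.out.println(map.putIfAbsent("k", "first"));  // inserted
        System.out.println(map.putIfAbsent("k", "second")); // existing value kept
    }
}
```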

In cases like this, you have to just suck it up and pile the methods into the interface and worry about Scott Meyers later.

Categories: software engineering Tags:

Beware of Case-Sensitive Environment Variables in ANT

December 2nd, 2008 No comments

Today I was bitten by a kind of annoying feature of ANT. Sadly, a project I'm working on relies on a Swig-generated wrapper for a C++ library, in particular the SML client libraries for Soar. So, it's important to have all the DLLs in the right spot to avoid link errors. The usual, and I think most straightforward, solution is to ensure that the DLLs are on the system path. To this end, I can make a simple bat script (or shell script) that sets up the environment and then invokes Java.

However, I try to be good and write unit tests for this stuff, which means that the environment has to be set up correctly for JUnit too, even when invoked from ANT. So, in ANT, how do I extend the system path for the JUnit task while not clobbering the path that's already there? I did a little hunting around and found the nice <env> tag, which is documented in the exec task. Lo and behold, the second example is exactly what I want to do. Here is what I did, translated for junit:

<property environment="env"/>
<junit fork="true" ... >
  <env key="PATH" path="${env.PATH}:${basedir}/vendor/soar-8.6.3/bin"/>
  ...
</junit>

Pretty straightforward, right? But, of course, it didn't work right away. After a little more hunting, I figured out the problem is with that <property environment … > tag. It turns out there are a few things going on here:

  • On Windows, environment variable names are case insensitive
  • Properties in ANT are case sensitive
  • The case of the PATH environment variable in Windows appears to be totally random

These factors conspired against me. In my case, my path environment variable was spelled “Path” with a capital Pee. Changing the ANT file to use this spelling fixed everything. And yes, this limitation is documented in the documentation for the property task.

The really sad thing is that I got bit by this less than an hour later when I set up the project on Hudson (really cool by the way). In this case, it appears that Hudson (or something else) changes the PATH environment variable to all lower case when invoking ANT. So, now I have a build script that runs great on my machine, but fails on the build machine. I’m sure there’s a better solution, but for expediency, I went with this monstrosity:

<property environment="env"/>
<junit fork="true" ... >
  <env key="PATH" path="${env.path}:${env.Path}:${basedir}/vendor/soar-8.6.3/bin"/>
  ...
</junit>

I wonder if this will ever come back to haunt me… Yet another reason for me to eliminate this C++ dependency.

Categories: java, software engineering Tags: , ,