Wednesday, December 14, 2011

Uncharted Waters: Extending a Foreign System

Once again, it is time for team JCEV to tackle another CLI project for the Hale Aloha towers.  However, this time we did not design the system from scratch.  Instead, we took control of the hale-aloha-cli-kmj project that I previously reviewed and extended it by adding three new commands and updating the documentation.  So how did it turn out?  Well, as per usual I shall evaluate this new endeavor in the context of the three prime directives of open source software.

Prime Directive #1: Does the system accomplish a useful task?
To accomplish this prime directive, the system should fully implement the three commands specified.  The first command is set-baseline, which records the energy used by a tower or lounge over a specified day in one-hour periods, defaulting to yesterday if no date is given.  In both cases, the system successfully retrieves this data and stores the information in a Baseline object for later use.

set-baseline command with optional date argument.
set-baseline without the optional date argument.
The second command, monitor-power, should periodically retrieve the current power consumption of the given tower at the given interval.  The interval argument is optional, and the interval between checks defaults to 10 seconds if not specified otherwise.  In addition, the command should stop execution when the user presses a key like enter.  This version of the system contains all of this functionality and periodically prints the results as expected.  The only small issues are that the command will not stop unless the enter key is pressed (i.e. the s and the a in the image below did not stop the command's execution) and that terminating execution causes an invalid command error.  These do not really affect the functionality, though, so this command is in working order.

The monitor-power command in action.
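
A common way to implement this kind of stop-on-enter loop is to poll System.in between samples.  Here is a minimal, self-contained sketch (my own illustration, not the project's code) that also shows why stray keys alone cannot stop it:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;

    public class StopOnEnterDemo {
      public static void main(String[] args) throws IOException, InterruptedException {
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        // Console input is line-buffered, so ready() only becomes true once
        // the user presses enter; keys like 's' or 'a' by themselves never
        // reach the program.
        while (!in.ready()) {
          System.out.println("sampling power...");  // stand-in for the real query
          Thread.sleep(10000);                      // default 10-second interval
        }
        in.readLine();  // consume the pending line so it is not parsed as a command
      }
    }

Consuming the leftover input line before returning to the prompt, as in the last line of the sketch, might also be one way to avoid the invalid command error seen when the command terminates.
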
The third and final command is monitor-goal.  This command checks the current power consumption of the given tower or lounge and compares it to the baseline, which must be set before running this command.  If the current power consumption has been reduced by the specified goal percentage or more, then it reports that the goal has been met.  Otherwise, it indicates that the goal was not met.  The monitor-goal command does this periodically like the monitor-power command and should stop execution when the user presses a key like enter.  This implementation of monitor-goal suffers from the same problem as the monitor-power command in that it requires the user to press enter to exit, but once the user presses enter, the command exits as expected.  As a result, all three commands work to accomplish a useful task and this system fulfills the first prime directive.

The monitor-goal command successfully executing after the baseline has been set.
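
For reference, the goal comparison itself boils down to a one-line check.  A minimal sketch (my own illustration with made-up numbers, not the project's code):

    public class GoalCheck {
      // The goal is met when current power is at or below the baseline
      // reduced by the goal percentage.
      static boolean goalMet(double baselineKW, double currentKW, double goalPercent) {
        return currentKW <= baselineKW * (1.0 - goalPercent / 100.0);
      }

      public static void main(String[] args) {
        // A 5% goal against a 100 kW baseline is met at 95 kW or less.
        System.out.println(goalMet(100.0, 94.0, 5.0));  // true
        System.out.println(goalMet(100.0, 96.0, 5.0));  // false
      }
    }
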

Prime Directive #2: Can an external user successfully install and use the system?
I believe that this system fulfills this prime directive as well.  The original project was a little lacking in user documentation due to a sparse home page and a few key errors in the UserGuide, but these have been fixed in the latest version.  The home page now sports many screenshots of example executions that users can reference, and the new UserGuide removes the misleading instructions that afflicted the original guide (e.g. the unsupported sources that were previously listed have been removed).  The system is also very easy to install, as there is a clearly labeled distribution to download on the website, which contains an executable jar file that can be run without any compilation.  Consequently, I believe that the updated documentation should make this system easy to install and use, even for an external user.

Prime Directive #3: Can an external developer successfully understand and enhance the system?
Essentially, this entire exercise was a test of the third prime directive, as my team (JCEV) had to adopt this code base, understand how it worked, and extend it.  While we were successful in doing so, it was not an ideal experience, as we had to figure out how to extend the existing system on our own.  The DevelopersGuide gave a general overview of how the system worked, but the specifics of what changes were needed when adding a new command were nowhere to be found.  As a result, we had to skim the code in the CommandParser class to figure out exactly what these changes were before we could add our new commands.  Of course, this was a pain, and we decided to spare future developers this experience by listing the required changes in the updated DevelopersGuide.  The DevelopersGuide was also updated with the coding standards used in the project and instructions detailing how to generate the JavaDocs to further assist any future developers.  These changes should make it much easier for an external developer to successfully understand and enhance the hale-aloha-cli-kmj system, and thus I think that it now fulfills the third and final prime directive.

Conclusion
Overall, this was an interesting experience in software engineering.  While we managed to implement all of the functionality and produced what I would consider high-quality software, it was not without its issues.  These issues mostly stemmed from trying to extend an unknown code base with rather poor documentation, as a good amount of brain power and time was spent analyzing the existing source code to figure out how to fit our new features in.  This was probably the first time in my programming experience that I had to learn what someone else's code did by reading it, and it definitely emphasized the importance of good documentation.  Finally, this experience also made me realize just how essential good group members are.  When everyone does their part, the workload becomes much more bearable and the project can progress smoothly.  We did not finish with a lot of extra time like we did with our own CLI project, but we did have enough time to double-check everything and give ourselves the peace of mind that we did our best and that the product is up to our standards.  As a result, I would like to thank Eldon Visitacion and Jordan Takayama for their hard work throughout this semester.  It has been a pleasure working with them in group JCEV and I am looking forward to working with quality individuals like them in the future.

Friday, December 2, 2011

A Technical Review

Collaboration is the name of the game when it comes to software engineering, and an important part of the collaboration process is performing technical reviews of other developers' code.  Here we shall evaluate a group's Hale Aloha CLI project, in which they attempted to create an open source Java command line interface that allows users to query a server for various energy- and power-related information.  This was to be done using the WattDepot library, which simplifies the task of connecting to the server and retrieving the data so that the developer can focus on processing that data.  As a result, we will not focus so much on the code itself, but take a look at how "good" this project is by comparing it to the three prime directives of open source software.

Prime Directive #1: Does the system accomplish a useful task?
The first prime directive is fairly self-explanatory: does the system do what it is supposed to do?  Here I tested the various commands that the system was supposed to implement and checked whether they worked as I expected.  This initial release is expected to implement six commands: help (returns a list of commands), quit (exits the CLI), current-power (returns the current power usage of a given tower/lounge in kW), daily-energy (returns the energy used by a given tower/lounge on a given date), energy-since (returns the energy used by a given tower/lounge from a given date until now), and rank-towers (ranks the towers from least to most energy used over a given interval of days).  All six of these commands are present in the system and are mostly functional, but there are also a few problems.

The first major problem is that the commands do not work with all of the advertised source values.  The UserGuide lists lounge and telco sources (e.g. "Lokelani-04-telco") as valid, yet the system rejects them, claiming that they are not valid.  It works fine with all of the tower (e.g. "Lokelani") and lounge (e.g. "Lokelani-A") aggregates, though.

The system failing to recognize a telco source as a valid source.

Another problem is that the help command breaks if the jar file is moved to a different directory.  The current implementation of the help command retrieves its data from an external text file with a hard-coded path, so if the jar is moved outside of the distribution directory, the help command fails to work.  This is a problem since I think that the executable jar file should be self-contained and fully usable from anywhere in the file system.

The help command failing when the jar file is moved.

Yet another issue is that invoking the daily-energy command with today's date does not work.  It fails because the command tries to retrieve energy data between the start of the given date (12:00 am today) and the start of the next day (12:00 am tomorrow), which is an invalid range since the end of the range is in the future.  While the project specifications do not explicitly state whether today is a valid input for daily-energy, I expected the command to retrieve data from the start of today until the time of the latest recorded sensor data instead; one possible fix is sketched below the screenshot.

The daily-energy command failing to work when given today's date.
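
A possible fix would be to clamp the end of the range to the current time.  Here is a rough sketch using WattDepot's Tstamp utilities (I am assuming the Tstamp.makeTimestamp and Tstamp.incrementDays helpers from memory; error handling is omitted):

    import javax.xml.datatype.DatatypeConstants;
    import javax.xml.datatype.XMLGregorianCalendar;
    import org.wattdepot.util.tstamp.Tstamp;

    public class DailyEnergyRange {
      // Builds the end of a [startOfDay, end] range for daily-energy,
      // falling back to "now" when the requested day is today.
      static XMLGregorianCalendar endOfRange(XMLGregorianCalendar startOfDay) throws Exception {
        XMLGregorianCalendar now = Tstamp.makeTimestamp();
        XMLGregorianCalendar nextDay = Tstamp.incrementDays(startOfDay, 1);
        // If the start of the next day is in the future, clamp to now.
        return (nextDay.compare(now) == DatatypeConstants.GREATER) ? now : nextDay;
      }
    }
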

Finally, rank-towers does not work when it is given the same day as both the start and end dates.  Instead, it prints out just one line for the Lokelani tower with a value of 0.  This is not good, as giving the same date should either be considered valid input and print results for all four towers, or be flagged as invalid and raise an error message to inform the user.  Thus, I believe that the current behavior of rank-towers in this case is unacceptable: it neither signals that the input is invalid nor prints complete output.

The strange output from rank-towers when using the same day as both the start and end dates.

Overall, most of the functionality is there.  Outside of these cases, all of the expected commands are present and work as expected, printing results for valid inputs and error messages for invalid ones.  Consequently, I believe that this system somewhat fulfills the first prime directive, as long as the user is careful about which inputs are used and does not move the jar file.

Prime Directive #2: Can an external user successfully install and use the system?
This directive tests the documentation of the project from the view of an external user.  Unfortunately, the home page does not tell the user much about the system and lacks any sample input and output.  Furthermore, the UserGuide does not tell the user where to download the distribution or how to "install" the system (i.e. extract it from the zip archive), and the command it gives to invoke the executable jar file is wrong (the name of the jar is "hale-aloha-cli-kmj.jar", not "hale-aloha-cli.jar").  On the other hand, the UserGuide does a fairly good job of explaining which commands and arguments can be used (though there is the source error mentioned under the previous prime directive), the distribution is labeled with a version number to distinguish it from past versions, and the distribution contains an executable jar that does not require any building or compilation.  However, the aforementioned problems are rather major, so the system only partially fulfills this prime directive.

Prime Directive #3: Can an external developer successfully understand and enhance the system?
The third and final prime directive once again tests the project's documentation, but from a developer's view.  A major document for this directive is the DevelopersGuide, which should detail how an external developer can build the system and how to extend it.  While it does explain how to build the system using Ant, it does not tell the developer how to generate the system's JavaDocs.  It also does not mention that new functionality must be accompanied by new JUnit test cases; it only mentions that additions must pass CheckStyle, PMD, and FindBugs.  In addition, the DevelopersGuide makes no mention of any coding standards being followed.  The instructions for extending the system are rather vague too, as they do not explain exactly what to modify in the CommandParser class to add a new command and do not describe how to register new commands with the help command.  Finally, the guide mentions that the project is being managed in an issue-driven fashion, but does not explain exactly what that means for developers (i.e. make a new issue for every new command or piece of functionality).

Outside of the DevelopersGuide, JavaDocs and source code comments can help external developers understand the system.  Overall, the JavaDocs do a decent job of explaining what the functions do and what the expected arguments are, even though there are a few misspellings (i.e. Comman's instead of Command's in the comment for the Command interface's printResult method), fields are not commented, and the overview.html does not match the project.  Conversely, the source code comments are a little sketchy.  Some source files like EnergySince.java and HaleAlohaCli.java include inline comments that tell readers what the code is doing at a glance, while others like CommandParser.java have no such comments, leaving external developers to figure out the code on their own.  Another problem is that most of the classes do not support information hiding.  Ideally, all fields in a class should be private and only be accessible through getter and setter methods, but several classes like EnergySince and HelpCommand declare their fields without the private keyword.  This makes them package-private by default and allows any class within the package to manipulate the fields, which gives external developers the ability to put objects into possibly invalid states and unknowingly break the system (see the sketch below).  All in all, this project does not satisfy the third and final prime directive very well, as it fails to explain all of the expectations that are in place when extending the system, does not consistently support information hiding, and leaves a few critical things up to the developer to figure out on their own.
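
To illustrate the information-hiding point with a generic example (hypothetical classes, not the project's actual code):

    class BaselinePower {
      double kilowatts;  // package-private by default: anything in the package can change it
    }

    class GuardedBaselinePower {
      private double kilowatts;  // hidden: only reachable through the methods below

      public double getKilowatts() {
        return kilowatts;
      }

      public void setKilowatts(double kilowatts) {
        // A validating setter can reject invalid states outright.
        if (kilowatts < 0) {
          throw new IllegalArgumentException("Power cannot be negative.");
        }
        this.kilowatts = kilowatts;
      }
    }
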

Other Expectations
In addition to fulfilling the three prime directives of open source software, there are some additional requirements that the system should meet.  One such requirement is that the system should be well tested so that it is easy to find out whether enhancements broke pre-existing pieces of the system.  To be well tested, the developers should have created JUnit test cases that check most of the functionality of the system while executing a majority of the code for good coverage.  Unfortunately, this system fails to do this, as it includes only one real JUnit test case.  While there are other "test cases," they do not use JUnit and have a .txt extension, so they are not automatically run by Ant when the system is verified.  Additionally, the one JUnit test case that is present only tests three out of the five methods in the class that it covers.  The result is a barely tested system with only about 18% of its code exercised.  Hence, this system does not meet that requirement, as there are virtually no tests to determine whether new additions broke existing components of the system.

Another expectation was the use of issue driven project management (IDPM) to evenly distribute the work between group members.  In IDPM, members meet every few days to divide the current tasks into small work units called issues, which should be tracked on the Issues page of the project.  In addition, most commits to the system should reference an associated issue in the commit log to help explain why that change was made.  The Issues page makes it quite clear who was responsible for what, and it shows that the workload was not exactly even: Micah had 7 issues to work on while Richard and Jesse only had 4 each.  The group also did not do a very good job of linking their commits to their issues, as only 19 out of 26 commits (~73%) are linked with an issue.  Some of the changes were minor and can be excused, but that would still put them below the desired 90% of commits associated with an issue.  Therefore, despite their efforts, this group did not quite make the best use of IDPM as they worked on this project.

The final expectation is the use of continuous integration.  Continuous integration tools periodically attempt to build and test the system (e.g. after every commit) to ensure that it stays in an acceptable state.  If a build fails, the project members are alerted so that they can find and fix the problem.  This project uses a Jenkins server, which can be found here, as mentioned in the project's DevelopersGuide.  By looking at the Jenkins server, we can get an idea of how development of the project progressed.  There are a total of 9 failed builds, but 5 of them were a result of the server used by the system being down, which was out of the group's control.  The other 4 failed builds were promptly fixed, with five hours being the longest the system stayed in a failed state; that is not too bad considering the interval fell in the middle of the day when the developers were probably busy with classes.  The rate of commits was mostly consistent as well, with a couple of commits every day or two.  However, between 11/10 and 11/15 and between 11/23 and 11/26 there were no commits, which implies that no significant work was completed in those time frames.  This equates to about one week of lost time, which is definitely concerning.  Still, it looks like this project made good use of the available continuous integration tools.

The build history of the project as found on the Jenkins server.

Conclusion
Overall, this project has some problems.  It fails to completely satisfy the three prime directives of open source software due to incomplete documentation and weak testing, and it also falls short of some of the other expectations: the weak testing led to poor test coverage, and the group was not quite able to use issue driven project management optimally.  Despite these issues, their use of continuous integration was good and the overall system works in most cases.  In conclusion, this project could use a few tweaks, but reviewing it as an outsider has been an interesting and enlightening look into the software development processes of other developers.

Monday, November 28, 2011

A CLI Project With JCEV Using IDPM

Acronyms galore!  After learning about various software development tools and practices throughout this semester, I finally had a chance to apply them to a real group project.  This project involved creating a command line interface (CLI) which can be used to perform specific queries on the WattDepot server collecting data from the Hale Aloha residence towers.  We did this project in groups of three and used Issue Driven Project Management (IDPM) to keep the work flow moving smoothly.  Here I shall describe the process my group used and the resulting hale-aloha-cli-jcev system.

What is IDPM?
First off, what is this issue driven project management?  Issue driven project management works by holding frequent meetings (every couple of days) to divide the current tasks into small work units that take a day or two to complete.  These work units are called issues and are distributed among the group members.  The main principle behind IDPM is to consistently create, work on, and complete these issues so that every member knows what to work on and what to do next.  This keeps the flow of work consistent and helps the project move ahead smoothly and efficiently.  We implemented this methodology using the Google Project Hosting website, which conveniently has an "Issues" page where we could manage our tasks.  We made rather extensive use of this page and ended up creating more than 40 issues throughout the course of the project, which aided us in getting everything done in a timely fashion.

Our project's issues page near the end of the project.
The CLI
So how does the project itself work?  The system comprises three main components.  The first is the main control class, which runs the command line interface and grabs the user's input.  The second is a processor class, which takes the user's input and tries to invoke the command that the user wants to run.  The third and final part contains the various commands that the user can invoke.  The main idea behind this approach is that each component is self-contained and can treat each of the other components as a "black box" as long as the developer knows what will be passed to the component he is working on.  For example, someone implementing a new command does not really need to know how the control class or the processor works as long as he understands that he should implement the provided Command interface (sketched below) and knows what will be passed to his command, as explained in the interface documentation.  This makes the system somewhat modular, making it easy for developers to add to it.  The current release implements all of the expected functionality and includes a total of six working commands: help (returns a list of commands), quit (exits the CLI), current-power (returns the current power usage of a given tower/lounge in kW), daily-energy (returns the energy used by a given tower/lounge on a given date), energy-since (returns the energy used by a given tower/lounge from a given date until now), and rank-towers (ranks the towers from least to most energy used over a given interval of days).
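
To give a flavor of that design, here is a simplified sketch of the kind of interface each command implements (the real hale-aloha-cli-jcev interface may differ in names and signatures):

    // Simplified sketch of a CLI command contract.
    public interface Command {
      /** The name the user types to invoke this command. */
      String getName();

      /** Runs the command with the arguments parsed from the user's input. */
      void execute(String[] args) throws Exception;
    }
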

However, there are a couple of quirks.  One major quirk is the quit command, which needs to communicate with the control class to stop the application.  This special case required the quit command to throw a special exception that is passed up to the control class, since the obvious solution of calling System.exit is bad practice.  This works, but it does not quite fit the intent behind the design, as both the control and the processor classes had to be modified to make it work.  Another quirk is the fact that the table generated in the processor class has to be modified to add a new command.  Ideally, a developer could just add a new class and the processor could dynamically find and add it, but we did not have time to implement a reflection-based processor class and resorted to using a hard-coded table.  As a result, anyone adding a new command needs to change the table in the processor class by adding a new put call to the table generation function, as shown in the sketch below.
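
Concretely, that means one more entry in the table.  A sketch of the registration, using the Command interface sketched above (the class names here are hypothetical stand-ins, not our actual classes):

    import java.util.HashMap;
    import java.util.Map;

    public class Processor {
      private final Map<String, Command> commands = new HashMap<String, Command>();

      // The hard-coded table: adding a command means adding one more put call.
      private void buildCommandTable() {
        commands.put("help", new HelpCommand());
        commands.put("current-power", new CurrentPower());
        commands.put("my-new-command", new MyNewCommand());  // the line a contributor adds
      }
    }
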

Working with JCEV
As with any group project, you need a group, and I had the pleasure of working with Jordan Takayama and Eldon Visitacion in team JCEV.  While it took us a little while to adjust to each other's coding styles, we comfortably created a functional system before the due date.  This gave us time to check over each other's code and fix minor errors and inconsistencies, like using "kWh" versus "kilowatt-hours".  Everyone also made a JUnit test case for each class with good coverage to ensure that everything works properly.  As a result, I believe that our system contains quality software that has been thoroughly tested.

Conclusion
Overall, I believe team JCEV effectively used IDPM to create a CLI system.  We made good use out of IDPM to distribute and track the work that needed to be done which allowed us to finish with enough time to perform extensive checks and fix the tiny, non-system breaking errors.  Consequently, I believe that hale-aloha-cli-jcev is a quality software system that has been well tested to ensure that it works.  Not only that, but I have also gained valuable experience by putting all of the software engineering practices to use in an actual group project.  I have learned a lot from this project and I am definitely looking forward to next time!

Tuesday, November 8, 2011

Energizing Exercises: WattDepot Code Katas

If you have been keeping up with this blog, it should come as no surprise that energy-related research and software engineering go hand-in-hand, and here we shall have our first glimpse at how these two seemingly unrelated fields can assist each other.  To recap, one way that software engineering can assist energy research is through data collection and analysis, and software packages like WattDepot can be used to do just that (and more!).  As with the previous systems, we shall learn the basics of WattDepot through a set of simple exercises, or katas, to get some hands-on experience as we get our feet wet.



Kata 1: SourceListing
Implement a class called SourceListing, whose main() method accepts one argument (the URL of the WattDepot server) and which writes out the URL argument and a list of all sources defined on that server and their descriptions, sorted in alphabetical order by source name.  Use the System.out.format method to provide a nicely formatted list. 
As the first exercise, Kata 1 was relatively simple.  It was made even simpler by the fact that the example program essentially does this already.  While the code itself was quick and easy, I was not sure whether the sources retrieved from the server were always returned in alphabetical order.  It took a while, but I eventually convinced myself that they were sorted, and this uncertainty made the exercise take about 25 minutes.
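
The core of a solution looks roughly like the following sketch.  The WattDepotClient constructor, getSources, getName, and getDescription calls and the package names are my assumptions about the WattDepot client API from memory; it also sorts explicitly rather than trusting the server's ordering:

    import java.util.Collections;
    import java.util.Comparator;
    import java.util.List;
    import org.wattdepot.client.WattDepotClient;
    import org.wattdepot.resource.source.jaxb.Source;

    public class SourceListing {
      public static void main(String[] args) throws Exception {
        WattDepotClient client = new WattDepotClient(args[0]);
        List<Source> sources = client.getSources();
        // Sort by name so the output order does not depend on the server.
        Collections.sort(sources, new Comparator<Source>() {
          public int compare(Source a, Source b) {
            return a.getName().compareTo(b.getName());
          }
        });
        System.out.format("Server: %s%n", args[0]);
        for (Source source : sources) {
          System.out.format("%-25s %s%n", source.getName(), source.getDescription());
        }
      }
    }
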


Kata 2: SourceLatency
Implement a class called SourceLatency, whose main() method accepts one argument (the URL of the WattDepot server) and which writes out the URL argument and a list of all sources defined on that server and the number of seconds since data was received for that source, sorted in ascending order by this latency value.  If no data has ever been received for that source, indicate that.  Use the System.out.format method to provide a nicely formatted list.
The second exercise was rather straightforward as well.  It took a little bit of data manipulation to calculate the latency, but this was also shown in the example program.  The bulk of the time here was spent figuring out how to get the values sorted by latency.  After thinking about it for a few minutes, I decided to implement a LatencyData class which stores a source name and a latency value.  LatencyData implements the Comparable interface so that instances are ordered by their latency values.  As a result, all I had to do was make a bunch of LatencyData objects, throw them into an ArrayList, use Collections.sort to get the results in the right order, and print them.  All of this took about 40 minutes since I had to override several methods for the LatencyData class.  In hindsight, I should have made a general sorting class to be used by the later exercises that require data to be sorted on a value other than name.  I chose to implement separate classes for each type of data in the later exercises (i.e. EnergyData and PowerData), and a general class would have saved time and effort.
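
A sketch of the essentials of that class (simplified from what I actually wrote):

    // Pairs a source name with its latency and orders instances by latency.
    public class LatencyData implements Comparable<LatencyData> {
      private final String sourceName;
      private final long latencySeconds;

      public LatencyData(String sourceName, long latencySeconds) {
        this.sourceName = sourceName;
        this.latencySeconds = latencySeconds;
      }

      public String getSourceName() {
        return this.sourceName;
      }

      public long getLatencySeconds() {
        return this.latencySeconds;
      }

      // Ascending order by latency, as the kata requires.
      public int compareTo(LatencyData other) {
        if (this.latencySeconds < other.latencySeconds) {
          return -1;
        }
        return (this.latencySeconds > other.latencySeconds) ? 1 : 0;
      }
    }

With this in place, Collections.sort on an ArrayList of LatencyData objects yields the ascending order directly.
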



Kata 3: SourceHierarchy
Implement a class called SourceHierarchy, whose main() method accepts one argument (the URL of the WattDepot server) and which writes out the URL argument and a hierarchical list of all sources defined on that server.  The hierarchy represents the source and subsource relationship between sources.
Here is where things started to get a bit more interesting.  Getting the subsources is rather simple: call the getSubSources method, use the resulting object's getHref method, and then extract the subsource's name by taking the substring after the final '/'.  Making sure that the indentation was right and that subsources were not printed as top-level sources was a little trickier.  To print the subsources, I decided to write a recursive method that prints the name of the top-level source, then calls itself on each subsource with a larger indentation.  This way, it is guaranteed to print the hierarchy properly no matter how many levels of subsources are present.  In addition, the method removes any found subsources from the original source list, preventing them from being printed again as top-level sources.  All in all, this took about 45 minutes to figure out and implement.
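
The recursive idea reduces to something like this sketch, with a plain name-to-subsources map standing in for the getSubSources/getHref calls described above:

    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class HierarchyPrinter {
      static void printHierarchy(String name, Map<String, List<String>> subs, int indent) {
        for (int i = 0; i < indent; i++) {
          System.out.print("  ");
        }
        System.out.println(name);
        List<String> children = subs.get(name);
        if (children != null) {
          for (String child : children) {
            printHierarchy(child, subs, indent + 1);  // each level indents one more step
          }
        }
      }

      public static void main(String[] args) {
        Map<String, List<String>> subs = new HashMap<String, List<String>>();
        subs.put("Lokelani", Arrays.asList("Lokelani-A", "Lokelani-B"));
        printHierarchy("Lokelani", subs, 0);
      }
    }
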


Kata 4: EnergyYesterday
Implement a class called EnergyYesterday, whose main() method accepts one argument (the URL of the WattDepot server) and which writes out the URL argument and a list of all sources defined on that server and the amount of energy in watt-hours consumed by that source during the previous day, sorted in ascending order by watt-hours of consumption.  If no energy has ever been consumed by that source, indicate zero.  Use the System.out.format method to provide a nicely formatted list.
Kata 4 was probably the most time consuming of them all.  While it seemed simple at first, getting the time stamps for the beginning and end of yesterday proved to be trickier than I thought.  To calculate yesterday's date, I decided to use the java.util.Calendar class (though in hindsight, it would have been easier to just use the built-in Tstamp functions).  I then used this Calendar object to create a string, which would be used to make a time stamp to be passed to the getEnergyConsumed function.  However, I neglected to notice that the month field of the Calendar class is actually one less than the traditional numbering system (i.e. January is 0), so my time stamps were invalid.  This caused me to get a lot of BadXmlExceptions, as the range I specified was outside of the stored sensor data.  It took me multiple sessions to figure this problem out, and this exercise took at least 4 hours to complete.  On the bright side, I was able to apply the same time stamp generation methods in the following katas, which made them much easier.
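
For anyone hitting the same wall, the off-by-one looks like this:

    import java.util.Calendar;

    public class CalendarGotcha {
      public static void main(String[] args) {
        Calendar yesterday = Calendar.getInstance();
        yesterday.add(Calendar.DAY_OF_MONTH, -1);  // roll back one day safely
        // Calendar.MONTH is zero-based (January is 0), so add 1 when
        // formatting a traditional yyyy-MM-dd style date string.
        String date = String.format("%d-%02d-%02d",
            yesterday.get(Calendar.YEAR),
            yesterday.get(Calendar.MONTH) + 1,
            yesterday.get(Calendar.DAY_OF_MONTH));
        System.out.println(date);
      }
    }
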


Kata 5: HighestRecordedPowerYesterday
Implement a class called HighestRecordedPowerYesterday, whose main() method accepts one argument (the URL of the WattDepot server) and which writes out the URL argument and a list of all sources defined on that server and the highest recorded power associated with that source during the previous day, sorted in ascending order by watts.  Also indicate the time when that power value was observed. If no power data is associated with that source, indicate that.  Use the System.out.format method to provide a nicely formatted list.
While this one sounded more complex than Kata 4 at first, it was relatively simple since I had the time stamps figured out.  The two main issues here concerned virtual sources (aggregates of non-virtual subsources) and finding the maximum recorded power.  Dealing with virtual sources can be tricky since they do not contain data points themselves and their subsources may not be synchronized.  However, this can be handled by sampling the data at large intervals (15 minutes in my case) to account for the discrepancies in synchronization, allowing virtual sources to create data points by aggregating the data of their subsources.  The second issue of finding the maximum power is then easily brute-forced by iterating through all of the resulting data points and simply keeping the one with the largest value.  Consequently, this one only took about 35 minutes to complete.


Kata 6: MondayAverageEnergy
Implement a class called MondayAverageEnergy, whose main() method accepts one argument (the URL of the WattDepot server) and which writes out the URL argument and a list of all sources defined on that server and the average energy consumed by that source during the previous two Mondays, sorted in ascending order by watt-hours.  Use the System.out.format method to provide a nicely formatted list.
Surprisingly, the final kata was not too difficult.  After playing with the Calendar class in the previous exercises, it was fairly straightforward to find the previous two Mondays.  The only tricky part was getting the previous week's Monday if today is Sunday or Monday, since there would not be a complete data set for that week's Monday.  Other than that, getting the two energy consumption readings and averaging them was very simple.  As a result, this exercise was completed relatively quickly with a time of 25 minutes.
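
The Monday-finding logic reduces to stepping backwards with Calendar.  A sketch of one way to do it (a simplified take; my actual edge-case handling may have differed):

    import java.util.Calendar;

    public class PreviousMondays {
      public static void main(String[] args) {
        Calendar day = Calendar.getInstance();
        // Step back at least one day so that a Monday "today" is excluded,
        // since it would not yet have a complete data set.
        do {
          day.add(Calendar.DAY_OF_MONTH, -1);
        } while (day.get(Calendar.DAY_OF_WEEK) != Calendar.MONDAY);
        System.out.println("Most recent complete Monday: " + day.getTime());

        day.add(Calendar.DAY_OF_MONTH, -7);  // the Monday before that
        System.out.println("Monday before that: " + day.getTime());
      }
    }
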




Overall, using these exercises as an introduction to WattDepot has shown me just how useful such tools can be in collecting and analyzing energy data.  With WattDepot, programmers do not have to go out of their way to get the data they want, so they can spend their time doing meaningful analysis of that data.  While I did encounter a rather frustrating non-WattDepot-related problem as I worked through these katas, the WattDepot API proved to be simple and painless to use.  So if you are interested in doing some energy data collection or analysis, WattDepot seems to be the way to go!

Tuesday, November 1, 2011

Energy in Hawaii and Software Engineering

Introduction
Hawaii is a unique place with its own unique set of problems.  This is especially true when it comes to the issue of electrical energy in terms of both generation and consumption.  So why would this be covered in a software engineering log?  Of course, this is an important issue for the residents of Hawaii to consider, but it is also producing some new and interesting software development opportunities.  Thus, let's take a look at some of the unique issues of Hawaii's energy situation and some of the efforts that are being taken to alleviate them.

Electrical Energy vs. Power
Before we begin, what is electrical energy?  Obviously, we know that electricity is the thing that comes out of our outlets, but how do we measure it?  It turns out that there are actually two ways to measure electricity: energy and power.  Energy is what makes our appliances run and makes electricity useful; it is usually measured in kilowatt-hours.  But why are our electrical appliances rated according to wattage?  Those are ratings of power, which tell us the rate at which the appliances consume electrical energy.  Consequently, the energy used is the product of the power being drawn by the appliance and the time it is running (energy = power * time), and the power is the rate at which the energy is being consumed (power = energy / time).  Hence, these two related concepts are much more different than they appear to be at first glance.  For more information and a different explanation, please consult this video.
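
A quick worked example makes the distinction concrete:

    A 100 W television running for 5 hours uses
        energy = power * time = 100 W * 5 h = 500 Wh = 0.5 kWh,
    while a 1,000 W microwave running for 6 minutes (0.1 h) uses
        energy = 1000 W * 0.1 h = 100 Wh = 0.1 kWh.
    The microwave draws ten times the power, yet uses one fifth the energy.
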

Issues
Now that we understand energy, what are the issues concerning electrical energy in Hawaii?  Of course, this energy must be generated in some fashion by the electric company, typically from non-renewable resources.  While mainland power plants use these resources as well, they often opt for cheaper local resources when available.  However, due to Hawaii's unique location and limited local resources, most of its power generation relies on expensive imported oil and coal, making energy costs much higher than on the mainland.  In addition, the mainland states make use of a single massive power grid to spread the workload and increase efficiency.  On the other hand, Hawaii's geography prevents the creation of a single statewide grid, resulting in the use of multiple, much smaller, and less efficient power stations to supply each island.  Therefore, Hawaii is in a rather unique position on the energy front compared to the mainland states.

Efforts
With all of these problems, there must be a solution, right?  Well, look no further, as the Hawaii Clean Energy Initiative (HCEI) has laid out what needs to be done to alleviate the aforementioned issues.  The HCEI aims to decrease energy usage by 30% while increasing clean energy generation to 40%, which together would cover an astounding 70% of Hawaii's energy needs with clean energy and be a huge step in improving Hawaii's energy future.  To do this, residents must increase their energy efficiency by following less wasteful practices, such as keeping the refrigerator in a cool location or replacing incandescent light bulbs with compact fluorescent bulbs or light-emitting diodes.  In addition, more energy would be generated through clean and renewable means, such as solar power or wind turbines that capture the energy of the ever-present wind.  As a result, both more efficient energy usage and the incorporation of local renewable energy resources are needed to help solve Hawaii's energy problems.

Software Engineering Opportunities
So where does software engineering fit into all of this?  For one, researchers will need to collect and analyze a lot of data to quantify the effectiveness of their energy efficiency efforts, and they will need software tools to perform these tasks.  Consequently, they will need software engineers with energy knowledge to design and create these tools.  Furthermore, incorporating renewable energy resources will require smarter power grid technology.  These new sources must be added and removed intelligently: just throwing more energy into the grid would be wasteful since the non-renewable generators would still be running at normal capacity, while removing too many of those generators would leave the grid overloaded and prone to failure.  As a result, software will be needed to track both power generation and consumption to help incorporate these new energy sources safely and effectively.

Conclusion
Overall, Hawaii is a unique place with its own unique set of energy problems.  To solve these issues, Hawaii must both attempt to improve its energy efficiency and work on tapping the many local renewable resources.  However, both of these tasks will require the aid of software tools, providing software engineers with many research opportunities.  Hopefully you have gained a better understanding of Hawaii's current energy situation and can see how this seemingly unrelated field can be beneficial to the local software engineers.  So save energy and keep green.  Perhaps a software engineering opportunity might manifest itself as a result!

Monday, October 24, 2011

5 Important Software Engineering Practices / Concepts

It's been a good semester of Software Engineering so far and I have learned a lot.  Of course, there are too many important tidbits to share in one post, but here I will share 5 points that stuck out to me, which you will hopefully find interesting or useful (plus it will help me review for the upcoming midterm!).

  1. Why should you always override the hashCode method when you override that object's equals method?

    The contract of the hashCode method states that if two objects are equal according to that object's equals method, then calling the hashCode method on the two objects must return the same integer. Since modifying the equals method of an object can change the definition of what makes two objects equal, the hashCode method must also be overridden to stay consistent with the new definition.
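
    For example, a minimal sketch of a class that keeps the two methods consistent:

    public class Point {
      private final int x;
      private final int y;

      public Point(int x, int y) {
        this.x = x;
        this.y = y;
      }

      @Override
      public boolean equals(Object obj) {
        if (!(obj instanceof Point)) {
          return false;
        }
        Point other = (Point) obj;
        return this.x == other.x && this.y == other.y;
      }

      @Override
      public int hashCode() {
        // Derived from the same fields equals uses, so equal Points
        // always share a hash code.
        return 31 * x + y;
      }
    }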

  2. What are the four properties that any implementation of the equals function should exhibit when comparing two non-null objects?

    1. Reflexivity: x.equals(x) must return true.
    2. Symmetry: x.equals(y) must return the same value as y.equals(x).
    3. Transitivity: If x.equals(y) returns true and y.equals(z) returns true, then x.equals(z) must also return true.
    4. Consistency: x.equals(y) must return the same value over multiple calls to equals given that no information used in the comparison is changed.

  3. Which attribute of the Ant property tag should you use to make properties representing paths? Why?

    While the value attribute can work, you should use the location attribute of the property tag because location will convert the file separators (i.e. '/' or '\') to the ones that the operating system uses, while value will not. Hence, using the location attribute allows for better portability, as it will change the separators in the paths to match the operating system invoking Ant.
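
    For instance (the property names here are illustrative):

    <!-- location resolves the path against the project basedir and uses the
         host operating system's file separators: -->
    <property name="build.dir" location="build/classes" />
    <!-- value keeps the string exactly as it was written: -->
    <property name="build.raw" value="build/classes" />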

  4. What syntax do you use to access the value of a property defined in an Ant build file? For example, how would you access the value of:
    <property name="new.property" value="new" />

    Surround the property name with ${ }. So ${new.property} may be used to access the value of the sample property above.

  5. What are the three general categories of test cases?

    1. Acceptance tests: The program meets and passes some basic requirement. For example, a Robocode robot always wins against a certain sample robot.
    2. Behavioral tests: The implementation actually works as intended. For instance, the Robocode robot actually follows the strategy by moving to the specified locations throughout the battle.
    3. Unit tests: Test small self-contained classes or methods in isolation. For example, testing the output of a Robocode robot's fire-control method to ensure that the output matches the expected values without actually running the battles to check the robot's behavior.

And there you have 5 of the software engineering highlights that I have seen so far. Hopefully you have picked up something new and useful and I hope to share more with you as I continue my journey through the wonderful realm of software engineering!

Thursday, October 20, 2011

Group Projects Made Easy - Subversion and Google Project Hosting

Introduction
In software engineering, there are many problems that are just too large for a single developer to tackle in any timely fashion and require multiple software developers to work in tandem. However, this can lead to many problems. For example, how does one ensure that they are working on the latest version of the project or that one's changes are not overwritten by another developer's modifications? These problems are magnified as more and more developers are added to the project, yet large projects with many programmers are common and run successfully without these problems. So how do they do this? The answer is that they use configuration management tools like Subversion. Here I shall share my initial experiences in using the Subversion configuration management tool and the Google Project Hosting website to host my CrazyTracker robot.



Subversion and Google Project Hosting
So we want to collaborate with others on this CrazyTracker robot, but how do we do it? First off, we need the tools required for Subversion. There are two main components: a client program to communicate with the source code repository and a server to host said repository. I use a Windows operating system, so I chose the TortoiseSVN client, which allows the user to right-click a folder and choose various commands from the pop-up menu. As for the server, we can easily create one by making a new project on Google Project Hosting. Once those are set up, it is just a simple matter of checking out the empty source directory of our new project, adding the CrazyTracker project's files in TortoiseSVN, and committing the changes to upload the project to the Google Project Hosting server.
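
For those who prefer a command line over TortoiseSVN's menus, the equivalent workflow with the standard svn client looks roughly like this (the URL follows Google Project Hosting's usual pattern and is illustrative):

    svn checkout https://robocode-fch-crazytracker.googlecode.com/svn/trunk/ crazytracker
    cd crazytracker
    # copy the CrazyTracker project files into this directory, then:
    svn add *
    svn commit -m "Initial import of the CrazyTracker project."
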

Now that the code is up there, we have to add some documentation in the form of wiki pages (i.e. user / developer guides) and add some "committers" to work on the project. Once that is completed, anyone can see the source code and anyone with committer permissions can upload their changes to the repository using Subversion.

Some of the commands available through the TortoiseSVN user interface.
Includes the "Blame" command which shows the last person that modified each line.




Conclusion
Overall, setting up the robocode-fch-crazytracker project on Google Project Hosting was relatively quick, easy, and painless. TortoiseSVN installed without problems on my Windows computer, and Google's simple, intuitive interface made creating the project and its documentation straightforward. If anything, my only gripes about the whole process were that I had to check out the empty project to upload the original source files (why can't we add source files when creating the project?) and that the wiki markup language assumes that words with capital letters are links to other wiki pages and puts a "?" link after them. As a result, I had to put a "!" before those words to make those links go away, which was a simple fix, but it did get annoying after a while. However, my first experience with Subversion and Google Project Hosting was definitely a good one, and I can see how such tools help with collaboration. Subversion in particular has some very interesting features (for instance, it can tell you who changed which lines of code with the blame function!) and I am definitely looking forward to using these tools for my future group projects. Now that you have seen just how easy it is to set up the tools needed to make group work that much less painful, let's get collaborating!