Monday, October 27, 2008

Ada library listing

The Ada library contains a lot of interesting packages, but it is very hard to comprehend names like "a-szfzha.ads" and "a-ztinio.ads" (those two files correspond to Ada.Strings.Wide_Wide_Fixed.Wide_Wide_Hash and Ada.Wide_Wide_Text_IO.Integer_IO).

The following script will make any directory with .ads files much easier to understand. The output is HTML with one column of file names next to a column of the corresponding package names. Redirect it with "> GNAT_Library.html" to save the output to a file, or even pipe it straight to a browser. I imagine it would be possible to leverage AdaBrowse for this, but that is a project for a later date. Save this to a file, make it executable, and try it out.
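
As a rough illustration of the idea only (the script above is a shell script; what follows is not it), here is a minimal Ada sketch that does roughly the same job: walk the current directory for .ads files, pull a package name out of each one with a very naive text search, and print a two-column HTML table. The directory, the parsing heuristic, and the output details are all simplifications.

   with Ada.Directories;     use Ada.Directories;
   with Ada.Strings.Fixed;   use Ada.Strings.Fixed;
   with Ada.Text_IO;         use Ada.Text_IO;

   procedure Library_Listing is

      --  Return the name that follows the first occurrence of "package "
      --  in File_Name, or "" if none is found.  Deliberately naive: any
      --  line containing a comment marker is skipped, and generic formal
      --  parts are not handled.
      function Package_Name (File_Name : String) return String is
         File : File_Type;
      begin
         Open (File, In_File, File_Name);
         while not End_Of_File (File) loop
            declare
               Line  : constant String  := Get_Line (File);
               Start : constant Natural := Index (Line, "package ");
            begin
               if Start > 0 and then Index (Line, "--") = 0 then
                  declare
                     Rest : constant String :=
                       Line (Start + 8 .. Line'Last);
                     Stop : constant Natural := Index (Rest, " ");
                  begin
                     Close (File);
                     if Stop > 0 then
                        return Rest (Rest'First .. Stop - 1);
                     else
                        return Rest;
                     end if;
                  end;
               end if;
            end;
         end loop;
         Close (File);
         return "";
      end Package_Name;

      Search : Search_Type;
      Item   : Directory_Entry_Type;

   begin
      Put_Line ("<html><body><table>");

      Start_Search (Search, ".", "*.ads");
      while More_Entries (Search) loop
         Get_Next_Entry (Search, Item);
         Put_Line ("<tr><td>" & Simple_Name (Item) & "</td><td>"
                   & Package_Name (Simple_Name (Item)) & "</td></tr>");
      end loop;
      End_Search (Search);

      Put_Line ("</table></body></html>");
   end Library_Listing;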


Thanks to caracal of #ada @ freenode for completing the HTML and finishing off the parts for generic packages.

I've left in the line for Debian's GNAT because that is what I use. This will work on any directory with .ads files in it.

Sunday, June 8, 2008

Speak Ada! Speak!

Text to speech was a valuable tool in a recent class. It made for great debugging information when no visual interaction was possible, and was a helpful feature in the final application. The text-to-speech system used there was dedicated hardware that was easy to program. I may be a hardware junkie, but I'm pretty sure I can get by with a software solution.

After a bit of searching around I found Festival. Festival is available as a package for Linux and seems to be pretty mature. I installed Festival and read through a tutorial on it. Some reading was also needed to understand the parameters in GNAT.OS_Lib, but after examining the code a bit it made sense.

First I needed to make a script to translate the way Ada's spawning passes parameters into the way Festival takes its input.
(festival_helper.sh)

This is a simple shell script that needs to be saved and made executable. Running ./festival_helper.sh "your text goes here" will call Festival.

The Ada part was a bit kludgy with how the parameters were passed, but nothing too bad.
(festival_demo.adb)
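
Roughly, the Ada side looks like the following sketch (this is not the original festival_demo.adb; the phrase is made up, and it assumes festival_helper.sh sits in the current directory and simply hands its argument on to Festival, for example by piping it to festival --tts):

   with Ada.Text_IO;
   with GNAT.OS_Lib;  use GNAT.OS_Lib;

   procedure Festival_Demo is
      --  Assumes an executable festival_helper.sh in the current
      --  directory that passes its first argument on to Festival.
      Arguments : constant Argument_List :=
        (1 => new String'("Hello from Ada"));
      Success   : Boolean;
   begin
      --  Spawn runs the helper directly (no shell), so the whole phrase
      --  arrives as a single argument.
      Spawn (Program_Name => "./festival_helper.sh",
             Args         => Arguments,
             Success      => Success);

      if not Success then
         Ada.Text_IO.Put_Line ("Calling festival_helper.sh failed.");
      end if;
   end Festival_Demo;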


I'd like to think that is pretty straightforward...

I doubt this is the most efficient way; some magic with bash scripts could easily do better. Instead of re-spawning Festival with every call, another possible approach might be to use Festival's --server mode and send the text over the network. I've dabbled with interfacing directly with the Festival C++ API, but my understanding of C++ compiler messages is limited. I also had to do some shoehorning to get a working gcc on this system, so that might be a problem as well.

It works, it's simple, I'm happy.

Saturday, January 12, 2008

Avoiding rendezvous to delay starting of tasks

A common problem I encounter is that something needs to be done before a task can start running. I've always solved that by using a Start entry on the tasks and calling that entry when it is safe for them to start. Calling entries works fine, but I'd like to think the following is a bit neater:
(declare_task_test.adb)

Instead of delaying when the tasks start, delay when the tasks are declared.
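
A minimal sketch of the pattern (not the original declare_task_test.adb; the worker task and the shared variable are invented for illustration): the tasks are declared in an inner block, so they are not activated until the setup before the block has already run.

   with Ada.Text_IO;

   procedure Declare_Task_Test is

      Shared_Value : Integer := 0;

      task type Worker;

      task body Worker is
      begin
         Ada.Text_IO.Put_Line ("Worker sees" & Integer'Image (Shared_Value));
      end Worker;

   begin
      --  Everything here runs before any Worker exists.
      Shared_Value := 42;

      declare
         --  The Workers are activated here, only after the setup above
         --  has completed; no Start entry or rendezvous is needed.
         Workers : array (1 .. 3) of Worker;
      begin
         null;   --  The block does not finish until all Workers terminate.
      end;
   end Declare_Task_Test;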

Friday, January 11, 2008

Full XML Parser for Wikis

You may want a primer on parsing XML with Ada. That is an... okay... example, but better than what many Ada packages have on the web. The following is a more complete, but more complicated, example. In XML/Ada you supply an object to the parser that provides the rules for how to parse the document. It is pretty easy to work with and extend. This code was some of the earliest I started writing in this whole adventure; I waited to finalize it until now so I could completely figure out how the client code would work with it.

This is the client to the XML/Ada library and makes use of the parser object posted below: (test_wiki_parser.adb)

This uses the Input_Sources.Large_File posted earlier. This entire program boils down to:
  • Tell parser what to do
  • Open file
  • Parse file
  • Close file
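
Assuming the classic XML/Ada SAX API (the exact signatures vary a little between XML/Ada releases), those four steps map almost directly onto library calls. Here is a bare-bones sketch using the stock Sax.Readers.Reader, whose callbacks do nothing, and a made-up file name; the real program swaps in the Wiki_Parser below and the large-file input source:

   with Sax.Readers;         use Sax.Readers;
   with Input_Sources.File;  use Input_Sources.File;

   procedure Parse_Skeleton is
      Input  : File_Input;
      Parser : Reader;   --  "Tell parser what to do" means replacing this
                         --  stock Reader with an extension like Wiki_Parser
   begin
      Open ("wiki.xml", Input);   --  Open file
      Parse (Parser, Input);      --  Parse file
      Close (Input);              --  Close file
   end Parse_Skeleton;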

(wiki_reader.ads)

(wiki_reader.adb)

I tried passing the two processes (Document_Process & Collection_Process) at the declaration of Wiki_Parser, but evidently Ada does not like access types being passed there (and I also ran into a compiler bug). I could possibly wrap them in a record and pass that instead. I suspect this way is not the preferred one, but it works.
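
The shape of that workaround, sketched from the description above (this is not the original wiki_reader.ads; the access-to-procedure type and everything beyond the two component names are guesses):

   with Sax.Readers;

   package Wiki_Reader is

      type Process_Text is access procedure (Text : String);

      --  The SAX callbacks (Start_Element, Characters, End_Element) would
      --  be overridden on Wiki_Parser in the usual way and would hand the
      --  text they collect to these two accesses.
      type Wiki_Parser is new Sax.Readers.Reader with record
         Document_Process   : Process_Text;
         Collection_Process : Process_Text;
      end record;

   end Wiki_Reader;

The client then assigns the callbacks after the object is declared, for example Parser.Document_Process := Handle_Article'Access with a library-level Handle_Article (or GNAT's 'Unrestricted_Access for a nested one), and only after that hands Parser to Parse.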

Yay closures.

This combo fully parses the new 14 GB version of Wikipedia in 25 minutes. This is not the exact version that will be used in the final product: the Full_Document type will be declared elsewhere, but other than that all the code above will be included.

Friday, January 4, 2008

The difference a review can make

One of my previous classes involved building a search engine for a document. The professor demonstrated different ways of indexing documents. One that really jumped out at me was ngramming. Ada's ability to take Enumeration_Type'Value of an equivalently sized string and get an Enumeration_Type back really made this attractive to me. It seemed like a pretty straightforward idea that should be easy to implement.

It wasn't.

I wrote a script to define an ngram type covering all combinations of 3 letters. The script output a file of "aaa, aab, aac, .. zzx, zzy, zzz". There are 17576 combinations of 3 letters. Ada has 12 reserved words that are 3 letters long, so I had to code around that. My once neat "abs" became "absa", and I had to add a case statement with 7 choices and at least one if-else in every choice of that case. Yipe. What was a very close representation of the data became a burden. The conversion function and the accompanying type declaration make for a nearly 90 KB file. Ugh.

Using a script to write the data type for me should have been a clue.

I've been looking over that code recently and came up with a much neater solution. Here is the declaration and test of what should be a nearly drop-in replacement and is probably faster to boot:
(trigram_test.adb)
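
Here is a sketch of one plausible shape for such a replacement (not the original trigram_test.adb): encode the three letters as a number in base 26 rather than enumerating all 17576 literals. It assumes lowercase a..z input, and the names are invented.

   with Ada.Text_IO;

   procedure Trigram_Test is

      --  One value per three-letter combination, 0 .. 17575.
      type Trigram is range 0 .. 26 ** 3 - 1;

      --  Expects exactly three lowercase letters 'a' .. 'z'.
      function To_Trigram (Text : String) return Trigram is
         Result : Trigram := 0;
      begin
         for I in Text'Range loop
            Result := Result * 26
              + Trigram (Character'Pos (Text (I)) - Character'Pos ('a'));
         end loop;
         return Result;
      end To_Trigram;

      --  Inverse of To_Trigram, for debugging and display.
      function To_String (Value : Trigram) return String is
         Letters : String (1 .. 3);
         Rest    : Trigram := Value;
      begin
         for I in reverse Letters'Range loop
            Letters (I) :=
              Character'Val (Character'Pos ('a') + Integer (Rest rem 26));
            Rest := Rest / 26;
         end loop;
         return Letters;
      end To_String;

   begin
      --  Reserved words are no longer a problem: "abs" is just data here,
      --  not an identifier, so no "absa" workaround is needed.
      Ada.Text_IO.Put_Line ("abs  =>" & Trigram'Image (To_Trigram ("abs")));
      Ada.Text_IO.Put_Line ("back => " & To_String (To_Trigram ("abs")));
   end Trigram_Test;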

In comparison, the original enumeration type declaration took 704 lines. This new type might not be as representative, but I never used the representation in the old implementation anyway. This is far easier to understand and maintain.

Tuesday, January 1, 2008

Plans gone awry

An unforeseen obstacle is that the file IO included with XML/Ada is not the most robust solution. The included file input source works by reading the entire XML file into a buffer at once, and the largest file it can handle is 2 GB. That is not going to work for the 12 GB file I have.

My first thought was to read large blocks with Sequential_IO for the majority of the file and then use progressively smaller calls to Direct_IO for the remainder. The size of the calls to Direct_IO would be determined by the leq_pot demonstration posted earlier. I decided against this approach because of the complicated buffer and file slicing: I would have ended up with a massive development headache and then needed to run more extensive tests. The best tests are the tests you don't have to run.

Stream_IO is one of the more powerful IO packages in Ada. I had toyed with the idea of using Stream_IO to read the data and then using Unchecked_Conversion to change the data to the proper type. But I'm not a fan of Unchecked_Conversion, and I would have needed multiple copies of the same data in memory. I considered that ugly and moved on. The second attempt was to make a buffered Stream_IO package. I wrote the package pretty quickly and started implementing it. After doing more investigation I discovered that the decoding procedure for Unicode characters requires the buffer to be internal to the Input_Source and not in the IO package. Back to the drawing board.

After visiting the drawing board this time I felt much better about my solution. Using Stream_IO, the buffering logic from the previous attempt, and a procedure borrowed from the UTF-8 Unicode encoding, I managed to make an object that reads a file using Stream_IO and converts Stream_Elements directly to UTF-8 characters. Additional encoding schemes shouldn't be hard to add, but I'll just stick with UTF-8 for now. This is what I came up with:

(input_sources-large_file.ads)

(input_sources-large_file.adb)
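
The core of the approach, sketched here as a standalone program rather than the actual Input_Sources.Large_File code (it leaves out the Input_Source plumbing such as Next_Char and Eof, and the file name is invented), is simply: refill a large buffer from Ada.Streams.Stream_IO whenever it runs dry, and decode UTF-8 byte sequences straight out of it.

   with Ada.Streams;            use Ada.Streams;
   with Ada.Streams.Stream_IO;  use Ada.Streams.Stream_IO;
   with Ada.Text_IO;

   procedure Buffered_UTF8_Demo is

      Buffer_Size : constant := 16 * 1024 * 1024;   --  16 MB, as in the post

      type Buffer_Access is access Stream_Element_Array;

      File   : Ada.Streams.Stream_IO.File_Type;
      Buffer : constant Buffer_Access :=
        new Stream_Element_Array (1 .. Buffer_Size);
      Last   : Stream_Element_Offset := 0;
      Index  : Stream_Element_Offset := 1;

      Code_Points : Natural := 0;
      Non_ASCII   : Natural := 0;

      --  Return the next byte of the file, refilling the buffer whenever
      --  it runs dry.  A multi-byte sequence may span a refill; that is
      --  handled naturally because refilling happens per byte.
      function Next_Byte return Stream_Element is
      begin
         if Index > Last then
            Read (File, Buffer.all, Last);
            Index := Buffer'First;
         end if;
         Index := Index + 1;
         return Buffer (Index - 1);
      end Next_Byte;

   begin
      Open (File, In_File, "wiki.xml");

      while Index <= Last or else not End_Of_File (File) loop
         declare
            First : constant Stream_Element := Next_Byte;
            Code  : Natural := 0;
            Extra : Natural := 0;   --  continuation bytes still to read
         begin
            --  Classify the leading byte (assumes well-formed UTF-8).
            if First < 16#80# then
               Code  := Natural (First);
            elsif First < 16#E0# then
               Code  := Natural (First and 16#1F#);
               Extra := 1;
            elsif First < 16#F0# then
               Code  := Natural (First and 16#0F#);
               Extra := 2;
            else
               Code  := Natural (First and 16#07#);
               Extra := 3;
            end if;

            --  Fold in the continuation bytes, six payload bits each.
            for I in 1 .. Extra loop
               Code := Code * 64 + Natural (Next_Byte and 16#3F#);
            end loop;

            Code_Points := Code_Points + 1;
            if Code > 16#7F# then
               Non_ASCII := Non_ASCII + 1;
            end if;
         end;
      end loop;

      Close (File);
      Ada.Text_IO.Put_Line
        ("Code points:" & Natural'Image (Code_Points)
         & ", non-ASCII:" & Natural'Image (Non_ASCII));
   end Buffered_UTF8_Demo;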

The time required to parse a file using this input source is much more consistent. For the 390 MB Uncyclopedia, the file source included with XML/Ada would take anywhere from 1:24 to 2:01. What I wrote always takes between 1:12 and 1:14. Part of the speedup is probably due to the inlining of the UTF-8 conversion; the more consistent times are likely due to finer-grained IO. Using a 16 MB buffer, CPU usage on a 2.4 GHz Core 2 hovers between 90% and 100% of a single core, and 18 MB of RAM is consumed. A parser with high standards for what constitutes a document processed the 12 GB Wikipedia dump and counted 1074662 articles in 23 minutes.

Another input source to verify this against would be nice. I've verified this input source with the smaller Uncyclopedia, but for all I know it is faulty after the first 390 MB. Then again, 390 MB worth of data is probably a high enough standard.

Lessons taught:
  • This is the first tagged type extension, outside of a book exercise, that I have done.
  • Tagged types cannot have default discriminants.
  • Don't expect to deal with large files easily.
  • Simple is better.

What I'd like to continue with:
  • Make the package handle more than UTF-8.
  • Find a more optimal buffer size.
  • Can another thread speed up execution or is it IO bound?