A common problem I encounter is that something needs to be done before a task can start running. I've always solved that by putting a Start entry on the tasks and calling that entry when it is safe for them to start. Calling entries works fine, but I'd like to think the following is a bit neater:
(declare_task_test.adb)
Instead of delaying when the tasks start, delay when the tasks are declared.
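The linked program isn't reproduced here, but the idea is easy to sketch (the names below are made up): a task cannot run before its declaration is elaborated, so do the setup work first and only then enter the block that declares the task.

with Ada.Text_IO; use Ada.Text_IO;

procedure Declare_Task_Sketch is
begin
   Put_Line ("Doing the setup work first...");

   --  The task is only declared here, so it cannot start any earlier.
   --  It is activated just before the block's statements begin.
   declare
      task Worker;

      task body Worker is
      begin
         Put_Line ("Worker running, safe in the knowledge the setup is done.");
      end Worker;
   begin
      null;  --  The block will not exit until Worker terminates.
   end;

   Put_Line ("Worker finished.");
end Declare_Task_Sketch;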
Saturday, January 12, 2008
Friday, January 11, 2008
Full XML Parser for Wikis
You may want a primer on parsing XML with Ada. That is an... okay... example, but better than what many Ada packages have on the web. The following is a more complete, but more complicated, example.
In XML/Ada you supply an object to the parser that provides the rules for how to parse the document. It is pretty easy to work with and extend. This code was some of the earliest I started writing in this whole adventure. I waited to finalize it until now so I could completely figure out how the client code would work with it.
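To give a feel for that object, here is a bare-bones sketch of deriving from the SAX reader and overriding a couple of callbacks. The names are made up and the profiles are recalled from the Sax.Readers spec, so treat it as an outline of the approach rather than the real wiki_reader posted below; the package body would hold the actual bookkeeping.

with Sax.Readers;
with Sax.Attributes;
with Unicode.CES;

package Reader_Sketch is

   type Sketch_Reader is new Sax.Readers.Reader with private;

   --  Called for every opening tag; a wiki parser would watch for
   --  <page>, <title>, <text> and so on here.
   procedure Start_Element
     (Handler       : in out Sketch_Reader;
      Namespace_URI : Unicode.CES.Byte_Sequence := "";
      Local_Name    : Unicode.CES.Byte_Sequence := "";
      Qname         : Unicode.CES.Byte_Sequence := "";
      Atts          : Sax.Attributes.Attributes'Class);

   --  Called with the character data found between tags.
   procedure Characters
     (Handler : in out Sketch_Reader;
      Ch      : Unicode.CES.Byte_Sequence);

private

   type Sketch_Reader is new Sax.Readers.Reader with record
      In_Text : Boolean := False;  --  example of state a real reader tracks
   end record;

end Reader_Sketch;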
This is the client to the XML/Ada library and makes use of the parser object posted below: (test_wiki_parser.adb)
This uses the Input_Sources.Large_File posted earlier. This entire program boils down to:
- Tell parser what to do
- Open file
- Parse file
- Close file
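In skeleton form (hypothetical type and file names; the real client is the test_wiki_parser.adb linked above), those four steps come out to roughly:

with Input_Sources.Large_File;
with Wiki_Reader;

procedure Parse_Sketch is
   Input  : Input_Sources.Large_File.Large_File_Input;  --  assumed type name
   Parser : Wiki_Reader.Wiki_Parser;                     --  assumed type name
begin
   --  "Tell parser what to do" lives in Wiki_Reader's overridden callbacks.
   Input_Sources.Large_File.Open ("wiki-dump.xml", Input);  --  placeholder file name
   Wiki_Reader.Parse (Parser, Input);
   Input_Sources.Large_File.Close (Input);
end Parse_Sketch;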
(wiki_reader.ads)
(wiki_reader.adb)
I tried passing the two processes (Document_Process & Collection_Process) at the declaration of Wiki_Parser, but evidently Ada does not like access types being passed there (and I ran into a compiler bug). I could possibly wrap them in a record and pass them through that instead. I think this way is not the preferred way, but it works.
Yay closures.
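For reference, the record-wrapping alternative mentioned a moment ago would look roughly like this (hypothetical names and parameter profiles), with one object of the record type handed over instead of two separate access values:

package Callback_Sketch is

   --  Anonymous access-to-subprogram components are an Ada 2005 feature;
   --  the real Document_Process and Collection_Process profiles may differ.
   type Wiki_Callbacks is record
      Document_Process   : access procedure (Text : String);
      Collection_Process : access procedure;
   end record;

end Callback_Sketch;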
This combo fully parses the new 14 GB version of Wikipedia in 25 minutes. This is not the exact version that will be used in the final product. The Full_Document type will be declared elsewhere. Other than that all the code above will be included.
Friday, January 4, 2008
The difference a review can make
One of my previous classes involved building a search engine for a document. The professor demonstrated different ways of indexing documents. One that really jumped out at me was ngramming. Ada's ability to apply Enumeration_Type'Value to an equivalently sized string and get back an Enumeration_Type really made this attractive to me. It seemed like a pretty straightforward idea that should be easy to implement.
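As a tiny illustration of the attraction (the type name here is made up), 'Value maps a three-character string straight onto the enumeration literal spelled the same way:

with Ada.Text_IO; use Ada.Text_IO;

procedure Value_Demo is
   --  The real type would list every combination through zzz.
   type Trigram is (aaa, aab, aac, aad);
   T : constant Trigram := Trigram'Value ("aac");
begin
   Put_Line (Trigram'Image (T));  --  prints AAC
end Value_Demo;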
It wasn't.
I wrote a script to define the ngram type with all combinations of 3 letters. The script output a file "aaa, aab, aac, .. zzx, zzy, zzz". There are 17576 combinations of 3 letters. Ada has 12 reserved words that are 3 letters long, so I had to code around them. My once neat "abs" became "absa", and I had to add a case statement with 7 choices and at least one if-else in every choice of that case. Yipe. What was a very close representation of the data became a burden. The conversion function and the accompanying type declaration make for a nearly 90 KB file. Ugh.
Using a script to write the data type for me should have been a clue.
I've been looking over that code recently and came up with a much neater solution. Here is the declaration and test of what should be a nearly drop-in replacement, and it is probably faster to boot:
(trigram_test.adb)
In comparison, the original enumeration type declaration took 704 lines. This new type might not be as representative, but I never used the representation in the old implementation anyway. This is by far easier to understand and maintain.
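The linked test has the actual declaration; purely as a sketch of the general shape (not necessarily what trigram_test.adb does), a small range type plus a conversion function can stand in for the 17576-literal enumeration:

with Ada.Text_IO; use Ada.Text_IO;

procedure Trigram_Sketch is

   --  0 .. 17575: one value per three-letter combination.
   type Trigram is range 0 .. 26 ** 3 - 1;

   --  Treat the string as a base-26 number; assumes lowercase 'a' .. 'z'.
   function To_Trigram (S : String) return Trigram is
      Result : Trigram := 0;
   begin
      for I in S'Range loop
         Result := Result * 26
           + Trigram (Character'Pos (S (I)) - Character'Pos ('a'));
      end loop;
      return Result;
   end To_Trigram;

begin
   Put_Line (Trigram'Image (To_Trigram ("abc")));  --  prints 28
end Trigram_Sketch;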
Tuesday, January 1, 2008
Plans gone awry
An unforeseen obstacle is that the file IO included with XML/Ada is not the most robust solution. The included file input source works by reading the entire XML file into a buffer at once. The largest file it can handle is 2 GB. That is not going to work for the 12 GB file I have.
My first thought was to read large blocks with Sequential_IO for the majority of the file and then use progressively smaller calls to Direct_IO for the remainder. The size of the calls to Direct_IO would be determined by the leq_pot demonstration posted earlier. I decided against this approach because of the complicated buffer and file slicing. I would've ended up with a massive development headache and then needed to run more extensive tests. The best tests are the ones you don't have to run.
Stream_IO is one of the more powerful IO packages in Ada. I had toyed with the idea of using Stream_IO to read the data and then using Unchecked_Conversion to change the data to the proper type. I'm not a fan of Unchecked_Conversion, and I would need to keep multiple copies of the same data in memory. I considered that ugly and moved on. The second attempt was to make a buffered Stream_IO package. I wrote the package pretty quickly and started implementing it. After doing more investigation I discovered that the decoding procedure for Unicode characters requires the buffer to be internal to the Input_Source and not in the IO package. Back to the drawing board.
After visiting the drawing board this time, I felt much better about my solution. Using Stream_IO, the buffering logic from the previous attempt, and a procedure borrowed from the UTF-8 Unicode encoding, I managed to make an object that reads a file using Stream_IO and converts Stream_Elements directly to UTF-8 characters. Additional encoding schemes shouldn't be hard to add, but I'll just stick with UTF-8 for now. This is what I came up with:
(input_sources-large_file.ads)
(input_sources-large_file.adb)
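The linked spec and body are the real thing; as a rough outline (made-up names and fields, profiles recalled from the Input_Sources spec, body omitted), extending an XML/Ada input source amounts to wrapping a Stream_IO file and a buffer in a tagged extension and providing Next_Char and Eof:

with Ada.Streams.Stream_IO;
with Input_Sources;
with Unicode;

package Stream_Source_Sketch is

   type Stream_Input is new Input_Sources.Input_Source with private;

   procedure Open (Filename : String; Input : out Stream_Input);

   --  Decode the next character from the internal buffer, refilling
   --  the buffer from the file when it runs dry.
   procedure Next_Char
     (From : in out Stream_Input;
      C    : out Unicode.Unicode_Char);

   function Eof (From : Stream_Input) return Boolean;

private

   type File_Access   is access Ada.Streams.Stream_IO.File_Type;
   type Buffer_Access is access Ada.Streams.Stream_Element_Array;

   type Stream_Input is new Input_Sources.Input_Source with record
      File   : File_Access;    --  opened at Open
      Buffer : Buffer_Access;  --  allocated at Open; 16 MB in the tests below
      Index  : Ada.Streams.Stream_Element_Offset := 1;
      Last   : Ada.Streams.Stream_Element_Offset := 0;
   end record;

end Stream_Source_Sketch;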
The time required to parse a file using this input source is much more consistent. For the 390 MB Uncyclopedia, the file source included with XML/Ada would take anywhere from 1:24 to 2:01. What I wrote always takes between 1:12 and 1:14. Part of the speedup is probably due to the inlining of the UTF-8 conversion. The more consistent times are likely due to finer-grained IO. Using a 16 MB buffer, CPU usage on a 2.4 GHz Core 2 hovers between 90% and 100% of a single core, and 18 MB of RAM is consumed. A parser with high standards for what constitutes a document processed the 12 GB Wikipedia and counted 1074662 articles in 23 minutes.
Another input source to verify this against would be nice. I've verified this input source with the smaller Uncyclopedia. For all I know this source is faulty after the first 390 MB. Then again, 390 MB worth of data is probably a high enough standard.
Lessons taught:
- This is the first tagged type extension, outside of a book exercise, that I have done.
- Tagged types cannot have default discriminants (a small illustration follows at the end of this post).
- Don't expect to deal with large files easily.
- Simple is better.
What I'd like to continue with:
- Make the package handle more than UTF-8.
- Find a more optimal buffer size.
- Can another thread speed up execution or is it IO bound?
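On the default-discriminant lesson above, a two-line illustration (unrelated to the parser itself):

package Discriminant_Sketch is

   --  Fine: an untagged type may give its discriminant a default.
   type Plain_Holder (Size : Natural := 0) is null record;

   --  Rejected: a tagged type may not, which is why the line below
   --  has to stay commented out.
   --  type Tagged_Holder (Size : Natural := 0) is tagged null record;

end Discriminant_Sketch;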