|Posted by Rowan Powell on July 9, 2015 at 2:15 PM|
As part of my efforts to expand my abilities, particularly with Java, I developed a small application in six hours over two days. It uses freely available libraries from the internet to build a system that can recognise what a user is saying, check it for key search words and phrases, and also search for relevant words around the spoken terms to judge whether it's reasonable to flag the user for further investigation.
The first part of the application is the hardest and least reliable, as speech interpretation is a problem that has been in the process of being solved for a very long time. For my application I tried using IBM's Speech-to-Text service, available through the Bluemix platform, but the system proved very difficult to use and had little in the way of guides on how to put together an application that makes use of it. This was unfortunate, because the service appears to be a lot higher quality than others on offer. I ended up working with Voce, which could be added as a .jar library through Eclipse, which was significantly simpler. Unfortunately, the Voce engine has some real issues in reliably detecting words and doesn't automatically use a dictionary.
The second part involved saving a log of what the user said to a file, which was fairly trivial as I had created a class to do this for a previous project (the client-server auction system needed a persistent data log).
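A minimal sketch of what such a logging class might look like (the class and method names here are illustrative, not taken from the actual project):

```java
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
import java.io.UncheckedIOException;

// Minimal persistent log: appends each recognised phrase to a text file.
// Names are illustrative, not from the original auction-system code.
public class SpeechLog {
    private final String path;

    public SpeechLog(String path) {
        this.path = path;
    }

    // Append one line to the log, creating the file if it doesn't exist.
    public void append(String line) {
        // FileWriter's second argument enables append mode.
        try (BufferedWriter out = new BufferedWriter(new FileWriter(path, true))) {
            out.write(line);
            out.newLine();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Opening the file in append mode on every call is slow but simple, and it means the log survives even if the program crashes mid-session.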
The third part was finding whether keywords matched sections of a body of text. This was also a really simple thing to implement, as I had already written a pair of functions that check whether one word can be made from another by insertions or deletions, which can be found on my GitHub at https://github.com/Xaoka/Public-Demo-Code/blob/master/TextCorrectingContains. I extended these to take a maximum allowed edit distance into consideration; here I just used 2 as a default value, but the function could easily be changed to check, say, whether the edit distance is less than 25% of the word's length. In my application I also disregarded words shorter than four characters, as almost any word could be corrected to them.
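The thresholded matching idea can be sketched as follows. Note this is a re-implementation sketch, not the code from the linked repo, and it uses a standard Levenshtein distance (which also counts substitutions as edits, not just the insertions and deletions described above):

```java
// Fuzzy keyword matching: a word matches the keyword if its edit distance
// is within a small threshold. Constants mirror the defaults in the post.
public class FuzzyMatch {
    static final int MAX_EDITS = 2;   // default maximum allowed edit distance
    static final int MIN_LENGTH = 4;  // ignore very short words entirely

    // Classic dynamic-programming Levenshtein distance.
    static int editDistance(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i;
        for (int j = 0; j <= b.length(); j++) d[0][j] = j;
        for (int i = 1; i <= a.length(); i++) {
            for (int j = 1; j <= b.length(); j++) {
                int sub = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1,   // deletion
                                            d[i][j - 1] + 1),  // insertion
                                   d[i - 1][j - 1] + sub);     // substitution
            }
        }
        return d[a.length()][b.length()];
    }

    // True if any word in the text is within MAX_EDITS of the keyword.
    static boolean fuzzyContains(String text, String keyword) {
        String target = keyword.toLowerCase();
        for (String word : text.toLowerCase().split("\\s+")) {
            if (word.length() < MIN_LENGTH) continue;
            if (editDistance(word, target) <= MAX_EDITS) return true;
        }
        return false;
    }
}
```

With a threshold of 2, a misspelling like "explsion" would still match the keyword "explosion", while the minimum-length check stops short words from matching almost everything.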
The final part of the application required analysing Google search results for a given term, and this was made massively easier than it could have been by two factors. First, the analysis of the search results could be done by applying the keyword-matching functions I had used earlier. Second, the publicly available Gson library can convert JSON into a readable format. Trying to read a normal Google results page is difficult, as what comes back tends to be the display code for the page rather than the actual search results. Instead, a custom search engine was created (I actually used someone else's link) which returns all results in JSON format, which could then be parsed by Gson and read by my program.
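In the real application Gson does the heavy lifting by mapping the JSON response onto Java classes. As a self-contained illustration of what gets pulled out of such a response (without needing the Gson jar on the classpath), a crude extraction of the "title" fields might look like this; the field name and method are my own assumptions for the sketch:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustration only: pulls "title" values out of a JSON search response
// with a regular expression. Gson parses the full structure properly;
// this stdlib-only sketch just shows the kind of data the response holds.
public class ResultTitles {
    static List<String> extractTitles(String json) {
        List<String> titles = new ArrayList<>();
        // Matches: "title": "some text"
        Matcher m = Pattern.compile("\"title\"\\s*:\\s*\"([^\"]*)\"").matcher(json);
        while (m.find()) {
            titles.add(m.group(1));
        }
        return titles;
    }
}
```

Each extracted title (and snippet) can then be fed straight back through the fuzzy keyword matching from the previous step.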
This was a really fun and challenging project to put together, and it really helped me to branch my skills out further, exploring the options available through public APIs and the various libraries people have put together.
Categories: Summer 2015