Once bitten by Open Source, hooked forever?

So some would claim. But having just read the news from Munich, I would reiterate the need for some soul-searching as to the truth of that claim. The news being that the City of Munich, having decided to switch from Microsoft to Linux in 2004, is considering going back to Microsoft. Sure, there may be some shady business involved but reading the article, there are valid problems that the users are raising.

There are undeniable benefits of Open Source stuff and I won’t bore everyone by going into them again. And undoubtedly some issues stem from users just being so used to Microsoft. But what stood out for me was the comment Munich’s mayor Dieter Reiter made about the complications with managing email, calendars and contacts and that in his view, Linux is sometimes behind Microsoft.

Now before y’all start listing the amazing tools I can sudo onto my Ubuntu machine, that’s not the point. The point is that what Microsoft does offer and which still eludes the Open Source scene is integration and end-user friendliness. Ubuntu sort of makes a stab at that but in my view still falls short.

I will forgo my usual verbosity and simply pose some questions:

  1. Was it really smart of Mozilla to ditch the official development of Thunderbird (their email client) and Lightning (the calendar that goes with it)? Rather than integrating it further with Firefox and coming up with a webmail service based on it?
  2. Why is there still so little cross-project coordination and cooperation in the Open Source scene?
  3. Could this be a painful lesson that OS is not an addictive drug to most users and that they will come off it if they’re having a bad trip? Does this mean that the cavalier way in which most OS projects approach issues of usability and the user interface is coming round big time to bite us?

Don’t get me wrong. I still think it’s the only sustainable way forward, especially for SMLs (small to medium locales). But pride in amazing code will not cut the mustard with Mrs McGinty down the road who just wants something she can use out of the box and link to her phone and with a calendar for her webmail so she won’t forget her next appointment with the orthodontist. Without resorting to command lines that would make Linus weep.

While 420km below the ISS a Dani is sharpening his stone axe

26/05/2014 5 comments

Sometimes the world of software feels a bit like that, a confusing array of ancient and cutting edge stuff. I see you nodding sagely, thinking of the people still using Windows 98 or, even more extreme, Windows 3.11, or people who just don’t want to upgrade to Firefox 3 (we’re on 29 just now, for those of you on Chrome). I actually understand that. On the one hand you have very low-key users who just write the odd email and on the other you have specialists (this is most likely something happening at your local hospital, incidentally) who rely on a custom-rigged system using custom-designed software, all done in the days of yore, to run some critical piece of technology and who are loath to change it since… well… it works. I don’t blame them, who wants to mess around with bleeding tiles when they’re trying to zap your tumour.

But that wasn’t actually what I was thinking about. I was thinking about the spectrum of localizer friendly and unfriendly software. At the one extreme you have cutting edge Open Source developers working on the next generation of localization (also known as l20n, one up from l10n) and on the other you have… well, troglodytes. Since I don’t want to turn this into a really complicated lecture about linguistic features, I’ll pick a fairly straightforward example, the one that actually made me pick up my e-pen in anger. Plurals.

What’s the big deal, slap an -s on? Ummm. No. Ever since someone decided that counting one-two-lots (ah, I wish I had grown up a !San) was no longer sufficient, languages have been busy coming up with astonishingly complex (or simple) ways of counting stuff. At one extreme you have languages like Cantonese which don’t inflict any changes on the things they’re counting. So the writing system aside, you just go 0 apple, 1 apple, 2 apple… 100 apple, 1,000 apple and so on.

English is a tiny step away from that, counting 0 apples, 1 apple, 2 apples… 100 apples, 1,000 apples and so on. Spot something already? Indeed. Logic doesn’t really come into it, not in a mathematical sense. By that I mean there is no reason why in Cantonese 0 should pattern with 1, 2 etc but that in English 0 should go with 2, 3, etc. It just does. Sure, historical linguists can sometimes shed light on how these have developed but not very often. On the whole, they just are.

This is where it gets entertaining (for linguists). First insight: there aren’t as many systems as there are languages, so far fewer than 6,000. In fact, looking at the places where such rules are collected, there are probably fewer than 100 different ways (on the planet) of counting stuff. Still fun times though (for linguists). Let me give you a couple of examples. A lot of Slavonic languages (Ukrainian, Russian etc) require up to 3 different forms of a noun:

  • FORM 1: any number ending in 1 – but not 11 (1, 21, 31, 41…)
  • FORM 2: ends in 2, 3 or 4 – but not 12, 13 or 14 (2, 3, 4, 22, 23, 24, 32, 33, 34…)
  • FORM 3: anything else (0, 5, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 25, 26, 27…)
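For the programmers among you, that Slavonic rule sketched as a wee Python function (my own toy version, not lifted from any real project):

```python
def slavonic_plural_form(n):
    """Return 1, 2 or 3 for the Russian/Ukrainian-style noun form."""
    if n % 10 == 1 and n % 100 != 11:
        return 1  # 1, 21, 31... (note 11 itself takes FORM 3)
    if n % 10 in (2, 3, 4) and n % 100 not in (12, 13, 14):
        return 2  # 2, 3, 4, 22, 23, 24...
    return 3      # 0, 5-20, 25-30 and so on
```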

That almost makes sense in a way. But we can add a few more twists. Take the resurrected decimal system in Scottish Gaelic. It requires up to 4 forms of a noun:

  • FORM 1: 1 and 11 (1 chat, 11 chat)
  • FORM 2: 2 and 12 (2 chat, 12 chat)
  • FORM 3: 3-10, 13-19 (3 cait, 4 cait, 13 cait, 14 cait…)
  • FORM 4: anything else (21 cat, 22 cat, 100 cat…)

Hang on, you’re saying, surely FORM 1 and FORM 2 could be merged. ’fraid not, because while the word cat makes it look as if they’re the same, if you start counting something beginning with the letter d, n, t or s, the following happens:

  • FORM 1: 1 taigh, 11 taigh
  • FORM 2: 2 thaigh, 12 thaigh
  • FORM 3: 3 taighean, 4 taighean, 13 taighean, 14 taighean…
  • FORM 4: 21 taigh, 22 taigh, 100 taigh…
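Sketched the same way in Python (again a toy helper of my own making), the Gaelic system comes out as:

```python
def gaelic_plural_form(n):
    """Return 1-4 for the Scottish Gaelic noun form."""
    if n in (1, 11):
        return 1  # 1 taigh, 11 taigh
    if n in (2, 12):
        return 2  # 2 thaigh, 12 thaigh
    if 3 <= n <= 10 or 13 <= n <= 19:
        return 3  # 3 taighean... 19 taighean
    return 4      # 0, 20, 21, 100 and so on
```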

Told you, fun! Now here’s where it gets annoying. Initially, in the very early days of software, localization mostly meant taking software written in English and translating it into German, French, Spanish, Italian & Co and then a bit later on adding Chinese, Japanese and Korean to the list.

Through a sheer fluke, that worked almost perfectly. English has a very common pattern, as it turns out (one form for 1 and another for anything else) so going from English to German posed no problems in translation. You simply took a pair of English strings like:

  • Open one file
  • Open %d files

and translated them into German:

  • Eine Datei öffnen
  • %d Dateien öffnen
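In po file terms (the gettext format a lot of Open Source projects use for exactly this), that English/German pair would be stored roughly like so – a sketch of the standard msgid_plural syntax:

```po
# the po header declares the rule: Plural-Forms: nplurals=2; plural=(n != 1);
msgid "Open one file"
msgid_plural "Open %d files"
msgstr[0] "Eine Datei öffnen"
msgstr[1] "%d Dateien öffnen"
```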

Similarly, going to Chinese also posed no problem, you just ended up with a superfluous string because (I’ll use English words rather than Chinese characters):

  • Open one file
  • Open %d file

also created no linguistic or computational problems. Well, there was the fact that in French 0 patterns with 1, not with the plural as it does in English but I bet at that point English developers thought they were home and dry and ready to tick off the whole issue of numbers and number placeholders in software.

Now I have no evidence but I suspect a Slavonic language like Russian was one of the first to kick up a stink. Because as we saw, it has a much more elaborate pattern than English. There was one bit of good news for the developers though: although these linguistic setups were elaborate in some cases, they also followed predictable patterns, and you only need about 6 categories (which ended up being called ZERO, ONE, TWO, FEW, MANY and OTHER for the sake of readability – so Gaelic ended up with ONE, TWO, FEW and OTHER for example). Which meant you could write a rule for the language in question and then prep your software to present the translator – and ultimately the user – with the right number of strings for translation. Sure, they look a bit crazy, like this one for Gaelic:

Plural-Forms: nplurals=4; plural=(n==1 || n==11) ? 0 : (n==2 || n==12) ? 1 : (n > 2 && n < 20) ? 2 : 3;\n

but you only had to do it once and that was that. Simples… you’d think. Oh no. I mean, yes, certainly doable and indeed a lot of software correctly applies plural formatting these days. Most Open Source projects certainly do, programs like Linux or Firefox for example have it, which is the reason why you probably never noticed anything odd about it.

One step down from this nice implementation of plurals are projects like Joomla! which will allow you to use plurals but won’t help you. Let me explain (briefly). Joomla! has one of the more atavistic approaches to localization – they expect translators to work directly in the .ini files Joomla! uses. Oh wow. So yes, that DOES enable you to do plurals, but first you have to figure out how to express the plural rule of your language in Joomla! and put that into one of the files. In our case, that turned out to be

public static function getPluralSuffixes($count) {
	if ($count == 0 || $count > 19) {
		$return = array('OTHER');
	}
	elseif ($count == 1 || $count == 11) {
		$return = array('1');
	}
	elseif ($count == 2 || $count == 12) {
		$return = array('2');
	}
	elseif (($count > 2 && $count < 11) || ($count > 12 && $count < 20)) {
		$return = array('FEW');
	}
	return $return;
}

Easy peasy. One then has to take the English, for example:

COM_CONTENT_N_ITEMS_CHECKED_IN_0="No cat"
COM_CONTENT_N_ITEMS_CHECKED_IN_1="%d cat"
COM_CONTENT_N_ITEMS_CHECKED_IN_MORE="%d cats"

and change it to this for Gaelic:

COM_CONTENT_N_ITEMS_CHECKED_IN_1="%d cat"
COM_CONTENT_N_ITEMS_CHECKED_IN_2="%d chat"
COM_CONTENT_N_ITEMS_CHECKED_IN_FEW="%d cait"
COM_CONTENT_N_ITEMS_CHECKED_IN_OTHER="%d cat"

Unsurprisingly, most localizers just can’t be bothered doing the plurals properly in Joomla!.

Ning is another project in this category – they also required almost as many contortions as Joomla! but their mud star is for having had plural formatting. And then having ditched it because allegedly the translators put in too many errors. Well duh… give a man a rusty saw and then complain he’s not sawing fast enough or what?

And then there are those projects which stubbornly plod on without any form of plural formatting (except English style plurals of course). The selection of programs which are still without proper plurals IS surprising I must say. You might think you’d find a lot of very old Open Source projects here which go back so far that no-one wants to bother with fixing the code. Wrong. There are some fairly new programs and apps in this category where the developers chose to ignore plurals either through linguistic ignorance or arrogance. Skype (started in 2003) and Netvibes (2005) for example. Just for contrast, Firefox was born in 2002 and to my knowledge always accounted for plurals.

Similarly, some of them belong to big software houses which technically have the money and manpower to fix this – such as Microsoft. Yep, Microsoft. To this day, no Microsoft product I’m aware of can handle non-English type plurals properly in ANY other language. Russians must be oddly patient when it comes to languages because I get really annoyed when my screen tells me I have closed 5 window…

A lot of software falls somewhere between the two extremes – I guess it’s just the way humans are, looking at the way we build our cities into and onto and over older bits of city except when it all falls down and we have to (or can?) start from scratch. But that makes it no less annoying when you’re trying to make software sound less like a robot in translation than it has to…

PS: I’d be curious to know which program first implemented plurals. I’m sort of guessing it’s Linux but I’m not old enough to remember. Let me know if you have any insights.

PPS: If you’re a developer and want to know more about plurals, I recommend the Unicode Consortium’s page on plurals as a starting point, you can take it from there.

Sometimes being anal-retentive works

But mostly, it doesn’t. As is my conclusion regarding the “security” settings in Windows 8 where they’ve frankly tied themselves into a knot that would do the Midgard Serpent proud.

I only became aware of this knot when trying to install a program recently, in my case the highly innocuous LibreOffice update (which is basically a re-install that keeps your personal files and add-ons rather than an upgrade). So for the purposes of what I was doing, let’s treat this as a new installation. You get halfway through and what happens? Error 1303 is what happens, the one about “installer has insufficient privileges to access blablabla”.

So basically it’s telling me that I, the one and only user of this machine, who also happens to be logged in as an admin, don’t have the necessary rights to install a program. Rrrright…

There are two ways I can look at this. The cynic in me says they’re trying to force the bulk of users (who are out-of-the-box users who don’t “mess” with their systems) into using the pre-installed, approved and expensive junk their computers come with. Because the solutions to this problem start at the Gordian level and spiral upwards, some involving command prompts or a staggering array of permission setting windows that looks more like a digital card-house than system administration.

The other of course is sheer idiocy, where some developer figured that the best way of stopping users from cough using their systems would be to implement a fiendish array of permissions and user levels that would prevent unauthorised programs from installing themselves or users from accidentally messing things up. The only Ymir-sized snag is that you end up with users, desperate to install the things they actually want, fiddling around with the permission settings for users and admins. Usually in the form of trying to create at least one super-user to get around all these issues. Which brings us round in a neat circle, where anyone gaining illegal access to the system has all the privileges they could ever want. I believe sporting natures describe that as an “own goal”. Nice one chaps.

Oh, but I did find a fairly simple workaround in the end. Amusingly, this anal-retentive approach seems to apply mainly to system folders and folders the system created, such as Program Files or Program Files (x86). If you tell the installer to create a new directory, such as C:\Programan\LibreOffice4\, then it doesn’t bat an eyelid. “Oh my” as George Takei would say…

When peer review goes pear shaped

29/01/2014 2 comments

Well I’m glad I asked. What happened was this…

I had a request from someone asking if I could localize TinyMCE (a WYSIWYG editor – think of it as a miniature form of Word sitting within a website) so they could use it on their website for their Gaelic-speaking editors. There aren’t that many strings and the project is handled on Transifex using po files so the process seemed straightforward too. (If you don’t know what a po file is, the main thing about them is that many translation memory packages handle them and, if you have already done LibreOffice or something like that and stored those strings in the memory, there will be few strings in a project like TinyMCE for which there are no translation memory suggestions. In a nutshell, it allows an experienced software translator to work much faster.)

So off I go. Pretty much a cake-walk, half a Bond film and 2 episodes of Big Bang later, the job was done. Now in many cases once a language has been accepted for translation and when you have translated all or at least most of the project, these translations will show up in the released program eventually. But just because I’m a suspicious old fart (by now), I messaged the admins and asked about the process of getting them released. Good thing too. Turns out they use an API to pull the translations from Transifex and onto their system (they’ve basically automated that step, which I can understand). The catch however is that it only grabs translations set to Reviewed.
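As I understand it, the pull step amounts to a filter like this (a toy Python model with invented field names, not Transifex’s or TinyMCE’s actual code):

```python
# each translated string carries a flag saying whether a reviewer signed it off
translations = [
    {"key": "OPEN",  "text": "Fosgail", "reviewed": True},
    {"key": "CLOSE", "text": "Dùin",    "reviewed": False},  # translated but unreviewed
]

# the automated pull only takes the reviewed ones - unreviewed work never ships
released = [t for t in translations if t["reviewed"]]
```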

Cue a groan from me. To cut the TinyMCE story short at this point, it seems this is down to Transifex (at least according to the TinyMCE admin) so they were quite happy for me to just breeze through the strings and set them to Reviewed myself. Fortunately it wasn’t a large job so 15 minutes later (admittedly, I have about 14 other jobs on my desk just now which I would rather have done…), they were all set, thank goodness for keyboard shortcuts.

But back to the groan. I have come across this approach before and on the face of it, it makes sense. If you do community translation (i.e. you let a bunch of volunteers from the web translate into languages you as admins don’t understand and don’t have time to QA) but you’d still like at least some measure of QA over the translations, then adding this step of peer reviewing means you can be at least more or less sure that you’re not getting ‘Jamie is a dork’ and ‘Muahahaha’ type translations.

The only problem is, peer review in online localization relies on a large number of volunteers. Only a small percentage of speakers have any inclination towards translating pro bono publico and even fewer feel like reviewing other people’s translations (there is something slightly obscene about proofreading, it’s like having someone else put words in your mouth, they almost always taste funny…). I once did some rough and ready stats on the percentage of speakers of a given language who will be engaged in not-for-profit localization (of mainstream projects like Firefox or LibreOffice). It’s about ONE active localizer for every 500,000 speakers. So German can call upon something like 20 really active localizers. Scottish Gaelic on the other hand statistically has … well, it has less than 60,000 speakers. You work it out. So it’s seriously blessed by having TWO of them.

In any case, even if you disbelieve my figures (I’d be the first to admit to not being great shakes at numbers), the percentages are really small. So if you set up a translation process that necessitates not only translation but also peer review, you’re essentially screwing small languages because the chances are there will never be a reviewer with enough time or energy (never mind ability) to review stuff. It’s one of the reasons why we haven’t touched WhatsApp yet, they simply won’t let a translation into live without review.

So if you design a process like that and want to make sure you’re not creating big problems for smaller languages (and we’re not just talking Gaelic-style tiny languages, even languages like Kazakh or Estonian have such problems) make sure you

  • allow enough wriggle-room to override such requirements, for example by allowing a localizer to demonstrate their credentials (for example through long-term participation in other projects) and
  • design a system where, if it’s absolutely necessary to set specific tags, admins can bulk-tag translations for a certain language.

Over and out.

A bit of lexicographic navelgazing

11/05/2013 12 comments

Sometimes it’s not the developers’ fault. Shocking, I know. Sometimes, it’s the linguistic community (using the term loosely) that is at fault for not asking for the right thing.

[Image: the Mojave dictionary – just in case you didn’t believe me…]

I was looking up something in my Mojave dictionary the other week (don’t ask), followed by a Google search which pointed me at a Gaelic dictionary for administrative terminology. While the two are probably some 15 years apart, they have one thing in common. They’re both “flat” documents (there’s probably a niftier term but I can’t think of it). The Mojave dictionary is a ring-bound, printed dictionary, the Gaelic one some form of PDF. Now, don’t get me wrong, I have a love affair with dictionaries – I collect dictionaries like my mother shoes and handbags. So the more the merrier.

But what also hit me is how much of a debt of gratitude the Scottish Gaelic world owes a chap called Kevin Scannell. While doing some research in Dublin, we met at the Club Cónradh na Gaeilge for a pint and a chat about Irish IT and he explained to me, in short and simple phrases, what a lexical database is and why anyone should give a monkey’s about having one. That chat came at a pivotal moment because, having just finished the digitization of Dwelly’s classical Gaelic dictionary, I had both come to realise some of the inherent shortcomings of paper dictionaries in digital form and was on the cusp of embarking on the development of what is now the Faclair Beag along with Will Robertson.

Now there are many forms a lexical database can take but essentially the difference between a massive wordlist and a lexical database is that in the database you don’t just list words but you also mark them for what they are. For example, rather than having file, grunt, apple, horse, with, and in a comma separated list, such a database would mark file, grunt, apple and horse as being singular nouns, a second instance of file and grunt as regular verbs, ‘and’ as a conjunction and ‘with’ as a preposition. You can get a lot more fancy than that but at a very basic level, that’s the difference.

So what, you might say, you still end up with a dictionary at the other end and doing a database involves a whole lot of extra work. Tru dat, but the other word I learned on that trip was future proofing. What that means is that if you write a beautiful dictionary as a flat text document, you get just that, a beautiful dictionary. Useful, and for many centuries the only kind available. But that was down to technical limitations, not necessarily choice. Anyway, such a dictionary is future proof only to a certain extent. If it’s digital (as opposed to typed, which unbelievably still happens…) you can edit bits for a new edition. You can put it online and, with a bit of messing about, even turn it into a searchable dictionary. But that’s just about it, anything beyond that involves an insane amount of extra work. For example, your dictionary may list 20,000 headwords but there will be a lot of word forms which aren’t headwords: plurals, past tenses, words which only appear in examples but not as headwords and so on.

But I can look up the plural of goose. Yes, that’s true but say for example you wanted to do something beyond that. For example, you might be a linguist interested in word frequencies, wanting to find out how common certain words are. Do a word search in your text? Possible, but then you end up with a number for goose and another for geese. And in some languages the list of forms a word can take is huge, in Gaelic the goose can show up as gèadh, ghèadh, geòidh, gheòidh, gèadhaibh and ghèadhaibh.
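In database terms, the goose problem dissolves rather neatly. Here’s a toy Python version (invented IDs and structure, far simpler than the real Faclair Beag tables) where every attested form points back at its root entry, so a search for any of the six goose-shapes lands on the same entry and frequency counts add up per root:

```python
from collections import Counter

# one entry per root, keyed by an ID
ENTRIES = {101: "gèadh (goose)"}

# every inflected form maps to the ID of its root
FORMS = {form: 101 for form in
         ("gèadh", "ghèadh", "geòidh", "gheòidh", "gèadhaibh", "ghèadhaibh")}

def lookup(form):
    """Find the dictionary entry no matter which shape of the word comes in."""
    return ENTRIES.get(FORMS.get(form))

def root_frequencies(tokens):
    """Count word frequencies per root rather than per surface form."""
    return Counter(FORMS[t] for t in tokens if t in FORMS)
```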

But it’s not just nice for linguists. The applications of even a basic lexical database are impressive. Let me continue with the practical example to illustrate this. If you search for bean in the Faclair Beag, you end up seeing this entry at the top:

[Image: the entry for bean as the user sees it]

But what the casual dictionary user does not realise is that behind the scenes, things look a little different:

[Image: the same entry in the database behind the scenes]

We decided to keep it fairly simple and devised different tables for the different types of words we get in Gaelic – feminine and masculine nouns, verbs, prepositions and so on. And for each, we made a table which covers the different possible forms of each word. For a Gaelic noun, that means lenition, datives, genitives, vocatives, singular and plural, plus a junk field for anything that might be exceptional.

Yes, it’s a bit of extra work but one immediate benefit is that because each form is tied to the ID of the root, it doesn’t matter if a user sticks in a form like mhnàthadh – the dictionary will still know what to look for. That’s a decided bonus for people who are inexperienced or looking for a rare inflected form they’re unsure of. It also cuts down the number of See x entries, because two words which are simply variations of the same root (like crèadh and criadh in Gaelic, which are both pronounced the same way and mean the same thing) can point at the same entry. So usability is an immediate benefit.

Next benefit is an almost instantaneous and updatable spellchecker – as long as the data you punch in is clean, all you have to do is export the table and dump it in Hunspell for example. Ok, it involves a little more fiddling than that but compared to the task of extracting all words from a flat text file, it’s a doddle. For example, I was asked if we could do something for Cornish based on the Single Written Form dictionary. The answer was yes, but I don’t have time to extract all the words manually. In addition, our spellchecker is a lot leaner and smarter as a result because we were able to define certain rules, rather than multiply entries. For example, Gaelic has emphatic endings that can be added to any noun: -sa, -se, -san etc. So rather than add them manually to each noun, Kevin could just write a rule that said: if the table says it’s a noun, allow these endings. Simples.
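A rule like that looks something like this in Hunspell’s affix file format – a sketch from memory rather than our actual files, with a made-up flag letter: the .dic file tags each noun with the flag, and the .aff file spells out which endings the flag licenses:

```
# the .dic file: a word count, then one entry per line, nouns tagged with flag N
3
taigh/N
cat/N
gèadh/N

# the .aff file: flag N permits the three emphatic suffixes on any tagged word
SFX N Y 3
SFX N 0 -sa .
SFX N 0 -se .
SFX N 0 -san .
```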

Ok, so you get a spellchecker, big deal. It is, actually, but anyway, another spin-off was predictive texting for Gaelic (again with help from the indefatigable Kevin), because all we had to do was take the list and fiddle with the ranking. Simplifying a bit but again, when compared to doing it manually off a flat text file, it’s a lot less work. Another spin-off was a digital Scrabble for Gaelic and several other word games like hangman. Oh, and the University of Arizona asked for a copy to help them tag some Gaelic texts. And we’re not finished by a long shot.

Did I mention the maps? Perhaps too long a story for here but using our database we have been able to build dialect maps on steroids, like this one here indicating the word in question is southern:

[Image: a dialect map from the database]

And I’m sure there are other uses that we haven’t even thought of yet but whatever the development, we’re fairly future proof in the sense that with a bit of manipulation, we can make our dictionary data dance, sing, foxtrot and rumba, not just perform Za Zen.

Which brings me back to my original point. People in the world of small languages could benefit from doing their homework and, rather than rushing into something, going a bit more slowly and building something that is resilient for the future – even if “Let’s do a dictionary and publish it next year” sounds waaaay sexier. A database is something most developers can build and while it takes a bit more time, you don’t require a rocket scientist to add the language data – but in order to get it built, you have to ask for it in the first place.

Hidden, hidden, gone?

06/03/2013 4 comments

I’m compuzzled, as my old flatmate would say. Why is it that software projects often hide their nicest features away in the dark little corners of a site? Are they afraid it might be successful? Are they trying to hedge their bets in case it flops? Or are the explanations even more complicated?

Not sure I have the answer but let me expand with an example or two to begin with. You may remember my post on doing predictive texting in Gaelic, Irish and Manx. A short while ago, when talking to the developers over at Adaptxt, I learned to my dismay that while they were keen on enabling the technology on iPhones and Windows Phones, neither was going to happen. I already knew about iPhones being anal-retentive when it comes to localization and entry methods but I was dismayed that Windows seemed to be going down the same route. Not that I have or will have a Windows Phone but I’m not the measure of things. Other people might well buy one. Wait, so we may well get a localized version of the Windows Phone but no predictive texting in Gaelic? Surely not… So I decided to do a little digging and found that, apart from the developers at Adaptxt sadly being right, the Windows Phone site has a feature suggestion tool.

Incidentally, we have a small campaign going to lobby Microsoft to allow 3rd Party Entry Methods (or, in English, the option for people to develop, offer and install tools like predictive texting in languages Microsoft isn’t interested in). Every vote counts :)

But anyway, it’s a nice idea, a feature suggestion page. So why is it hidden underneath so many layers? I actually have no idea how you’re supposed to reach that page from the front page and only happened to chance across it through some crafty manipulation of Google (I’m not a developer but I’m very good at finding stuff on the web…). Are they afraid people might actually participate en masse? Are they worried a developer might have to confront the fact that in reality, feature X sucks?

Google went through an even stranger metamorphosis. Back in the early days when Google was still new, they tried very hard to get folk involved and localization featured quite prominently in that. So the link to the Google In Your Language project was quite prominent and, naturally, I jumped at the chance of putting Gaelic on Google. What happened then was a bit like the St Brigid’s cross shrinkage in the RTÉ logo… first the prominent link went. Well… ok, I had bookmarked it and it wasn’t that hard to find via a search. Then the associated forum went dead. Then Google In Your Language was axed (of course without telling the translators). Bizarrely, the page is still up proclaiming that

Google believes that fast and accurate searching has universal value. That’s why we are eager to offer our service in all the languages scattered upon the face of the earth. We need your help to make this a reality.

You can volunteer to translate Google’s help information and search interface into your favorite language. By helping with our translation process you ensure that Google will be available in your language of choice more quickly and with a better interface than it would have otherwise. There is no minimum commitment. You can translate a phrase, a page or our entire site. Once we have enough of the site translated, we will make it available in the language you are requesting.

If you are interested in helping us, please read the translation style guide, frequently asked questions list, and the legal stuff. Then click on the link at the right to sign up as a volunteer.

We hope you enjoy working on our Google translation project and thank you for helping to make Google a truly worldwide web service.

Ha bloody ha. These days the cynical part of myself poses the question whether they had always planned this or whether it was something that just happened. We’ll probably never know as nobody knows nuttin’ or at least nobody is telling us nuttin’. But I wouldn’t put it past them to have done the cynical thing.

Or maybe organisations like Microsoft and Google are just as badly organised as smaller organisations. I know of at least one online dictionary project where the publisher, an academic institution, ummed and erred about whether to produce a digital searchable version of the dictionary or not. Over several years. When it looked like they were just going to let it die a silent death, someone with a bit of chutzpah just did it and stuck it on a corner of the institution’s website. At some point it had become such an institution that it was quietly accepted as the status quo – which of course also meant they didn’t have to support it financially. Accident or design? Who knows. Badly organised in any case.

So is it just sheer incompetence, a lack of imagination or empowerment, too many or not enough hoops that one has to jump through? I don’t know but it sure is annoying… in this spirit, time for a glass of Chambord a kind soul donated. Slàinte mhòr!

Needle in a haystack

09/02/2013 5 comments

It’s been a strange sort of end to the week. I e-met a new language and came face to face with a linguistic, digital needle in a cyberhaystack. Ok, I’m not making much sense so far, I know… just setting the scene!

We all know Skype, the new version of which (quoting my hilarious brother) “convinces through less functionality and more bugs”.  Back when Skype still belonged to itself, I eventually discovered the fact that, at least on Windows, it’s pretty easy to localize. You go to Tools » Change Language » Edit Skype Language file and right down there where everyone can see it, you have the option to save the English.lang file (which contains the English strings) under a new name and add your own translation. So back in 2011 I started working on a Gaidhlig.lang and by early 2012 had finally caught up with all the updates that kept getting in the way.

[Image: The Li Niha (Nias) interface]

What does one do when one has completed a translation? Sure, you submit it to the project and ask them to bundle it, release it, whatever. Not so fast, buckoes… Due to “size issues” (I’d like to remind everyone at this point that a full language file currently weighs in at a massive 400KB), Skype only bundles the usual 20 or so suspects with the install file: CJK (that’s Chinese, Japanese and Korean) and a bunch of European languages. Since they never thought of adding an “Install new language” function that could pull a file from some repository, the short of it was that even having localized the lot, you were on your own. Sure, you could post the file as an attachment on the forum, but then who goes trawling through a forum in search of a language file?

Using the usual “Gaelic” channels, I think we’ve reached a reasonable number of people so far, but certainly fewer than we would have reached had it been “inside the program itself”.

But before I knock the old forum too much, I should point out that it actually had a dedicated localization section. Why do I mention this? Because, moving to the next episode where we finally meet Mr Big, when Skype was bought by Microsoft, the forums were wiped and *cough* improved. That’s right, the localization section went. Especially the parts where people were trying very hard to figure out how to turn a .lang file into something that Linux and MacOS could digest. Am I glad I took copies of the bits that were useful…

Anyway, even in the new forum, the localization questions never went away. But the stock answer of the one admin who bothers to check that corner is always that “there’s no news”. In fairness, I don’t think he actually has the power to do anything, he’s just the unfortunate person who has to interact with, shock and horror, the users. So even though Skype was first launched in 2003, here we are in 2012 still asking the same questions – why can’t you bundle our language, why can’t we convert/localize the files for MacOS/Linux and how about frickin plural formatting?
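To spell out why plural formatting keeps coming up: a single “%d files” string simply cannot cover a language like Scottish Gaelic, which (per the CLDR plural rules) distinguishes four plural categories rather than English’s two. A minimal sketch of the category selection – the function name is mine, not anything Skype ships:

```python
def gd_plural_category(n):
    """Scottish Gaelic plural categories, following CLDR:
    'one' (1, 11), 'two' (2, 12), 'few' (3-10 and 13-19), 'other' otherwise."""
    if n in (1, 11):
        return "one"
    if n in (2, 12):
        return "two"
    if 3 <= n <= 10 or 13 <= n <= 19:
        return "few"
    return "other"
```

A localizable app would then look up one translated string per category, instead of forcing every language to squeeze its grammar into a singular/plural pair.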

Yep, “there’s no news”. The chap working on Welsh then had an interesting suggestion – can’t we host them on SourceForge? You see, the problem with distributing the files via the forum is that once your post moves off the first page, who’s going to see it? So, brilliant idea I thought and we went about setting up a project. Nothing fancy, just the .lang files which don’t come bundled with Skype and a few Wiki pages with guidance.

Seeing as I had a quiet day and since my contributions in terms of code are… amusing, I decided to hit the web to locate all the .lang files out there, or as many of them as possible anyway – I may suck at code but I rock at websearches! Half a day later, I had the most amazing collection of languages. Some I had known about – Gaelic, Welsh, Cornish, Irish and Uyghur – as their translators had been active on the forum. Some were part of the usual suspects, but some were totally unexpected and one I’d never even heard of, which is, as a matter of fact, rather unusual. So in the end, we had:

  1. Adyghe
  2. Afrikaans
  3. Albanian
  4. Armenian
  5. Basque
  6. Breton
  7. Chuvash
  8. Cornish
  9. Erzya
  10. Esperanto
  11. Faroese
  12. Gaelic
  13. Irish
  14. Ligurian
  15. Macedonian
  16. Mirandese
  17. Nias
  18. Tajik
  19. Tamil
  20. Uyghur (Perso-Arabic and Latin script)
  21. Welsh

Definitely wow. Admittedly, not all are complete, but it’s still one of the most diverse lists I’ve ever come across, even if there are no languages from the Americas on it. Adyghe, Chuvash and Erzya in particular are not languages you normally see on localization projects. And Nias I had never even heard of. Turns out it’s a language of some 700,000 speakers on an island off the coast of Sumatra. That certainly cheered me up. Yeah I know, geek :)

But what made me shake my head all afternoon was something else – the lengths I had to go to in manipulating my websearches and the places I found some of them. Gaelic I had; Welsh, Albanian and Cornish came off Skype’s forum. Basque (normally a rather well organized language) I found embedded as a .obj file on some archived forum post. Adyghe, Chuvash and Erzya came off some websites that looked a bit like a forum where someone had posted the translations, in the case of Erzya without linebreaks – in two cases with the Russian strings still embedded, so I had to strip those out first before creating the .lang files. Armenian came out of a public DropBox and Breton off the Ofis ar Brezhoneg website. Afrikaans was on some unlinked page on someone’s personal website. Esperanto was on the Wiki of the Universala Esperanto Asocio, but it took me some time to figure out that in order to get the strings, I had to trawl through the page history, as someone had at some point – accidentally or deliberately – deleted them. Mirandese and Nias were in some silent loop on abandoned university websites – probably student projects from long ago. And one came off a file sharing site, I forget which, making me seriously wonder if I was downloading porn, a virus or actually the .lang file. I even found Kurdish, but the people who did that seem to have accidentally stripped out the string names, so having explained the problem, they’re trying to match them together again as my Kurdish isn’t that baş.
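The Kurdish case – translated values surviving but the string names stripped out – has at least one naive repair, provided the translation kept the original line order: pair the orphaned values back up with the keys taken from English.lang. A sketch under that assumption (the function is mine, and order-based matching falls over the moment a line was dropped, hence the sanity check):

```python
def rematch_keys(english_text, translated_values):
    """Pair orphaned translated values with the keys from English.lang,
    assuming the translation preserved the original line order."""
    keys = [line.partition("=")[0]
            for line in english_text.splitlines() if "=" in line]
    if len(keys) != len(translated_values):
        # A mismatch means lines were lost or merged; blind zipping would
        # silently shift every translation onto the wrong key.
        raise ValueError("line counts differ; order-based matching is unsafe")
    return "\n".join(f"{k}={v}" for k, v in zip(keys, translated_values))
```

Even then a human still has to spot-check the result, which is presumably why the Kurdish folks are matching theirs up by hand.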

I didn’t quite know whether to congratulate myself or whether to cry. All that effort, all those wonderfully selfless people putting their time and effort into translating something into their language. And then, because the people making money off it couldn’t be bothered, we ended up with these needles in the cyberhaystack. Crying is still an option I feel…

It’s nice to know they’re on SourceForge now (check out SkypeInYourLanguage) and that there are a few people willing to put some time into making the process a bit better, but by gum, guys… if people are actually willing to help you make more money by making your product available in more languages, how about giving them a leg up, rather than the finger?
