
One forward and two to the side

25/06/2012

The debate about digital technology and localization and internationalization has probably raged in one form or another ever since someone wrote the first program. Mind, for me personally it goes back to that ill-fated moment when ASCII was born, with some bright spark arguing that no one would ever need more than the few letters English has. My first computing headaches were around ASCII – how do I do an /ɣ/ and what the heck was %73£ when someone typed it at the other end?
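For anyone who never had to fight that particular battle, here is a tiny sketch (in Python, purely for illustration – not from any actual project of mine) of what the headache amounts to: /ɣ/ simply has no slot in 7-bit ASCII, Unicode stores it without complaint, and bytes read back with the wrong encoding turn into exactly the sort of gibberish I remember.

```python
# A toy demonstration of the ASCII-era headache (illustrative only, Python 3).
text = "ɣ"  # U+0263, the IPA voiced velar fricative

try:
    text.encode("ascii")          # 7-bit ASCII has no code point for it
except UnicodeEncodeError as err:
    print("ASCII can't hold it:", err)

print("UTF-8 stores it as:", text.encode("utf-8"))   # b'\xc9\xa3'

# The other half of the problem: bytes written in one encoding and read
# back in another come out as nonsense at the far end.
print(text.encode("utf-8").decode("latin-1"))        # prints 'É£'
```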

Much has happened since and I’ve moved from phonology to software translation big time but I still can’t quite decide whether we’re in a better place now or not when it comes to small languages. Those technicalities (like ASCII vs Unicode) aside, the field has indeed opened up, in particular when it comes to open source software. There’s nothing but laziness stopping a language from having at least an office suite (LibreOffice), a browser (Firefox or Opera), an email client and calendar (Thunderbird and Lightning), a media player (VLC), a wiki (MediaWiki), a spellchecker, a forum package (phpBB) and blogging software (WordPress.org and .com) – covering a fair chunk of your average user’s needs. For the really tough there’s Linux in all its scary glory, of course. But even ignoring how high the bar is for actually localizing some of them, that’s not the whole story.

At least in digitized countries, a significant chunk of our work and social lives has shifted onto various digital platforms. Desktops, laptops, smartphones, tablets… you name it. Hardly a year goes by without some innovation hitting the headlines. And the tech-savvy (overwhelmingly the young) have become real digital nomads. Yesterday’s app is so passé today, and today’s market-leading mobile phone OS may be tomorrow’s digital roadkill (anyone remember Symbian?). It’s a bewildering, fluid place.

It’s a place we can’t ignore. Whether we like it or not, virtually anyone under the age of 25 has a smartphone, from rocky outcrops in the Western Ocean like Barra to the mountains of Gipuzkoa, the deserts of Arizona and the steaming hills of Papua New Guinea. Ok, maybe not Papua New Guinea yet, though it wouldn’t surprise me. The more of a space we can carve out for our languages and cultures, the better, because sadly the old maxim of “Use it or lose it” – or however your language puts that – is true.

So we must compete somehow, at least at some base level. But I increasingly feel that without a small but dedicated full-time team, this will become harder and harder unless there’s some magic on the way that I haven’t heard about. Let me give you an example. Predictive texting goes back to the 1970s, believe it or not, but – not to sound too depressing about it – it probably did not make huge inroads into our lives before the year 2000 or so, when it really took off on phones. Back then, you had those languages which your manufacturer deemed appropriate, maybe a dozen or so if you were lucky. We’re now in 2012 and I’m waiting with bated breath for the first release of Irish, Scottish Gaelic and Manx on Adaptxt which, after much searching, I discovered last year. Finally, an open source predictive texting project open to any language. Yay! Ok, so it only works on Android… I can live with that, looking at Android’s market share. It would be good if iPhones also supported third-party entry methods, but they don’t, and I’m getting to the cheesed-off stage with Apple’s approach to non-billion-speaker languages anyway.

But I digress. There we are, happily preparing the tool which will finally take Scots Gaelic and Manx out of the letter-by-letter age (Irish has had Téacs since 2008, but I’m not sure how alive the project is) when Apple starts pushing Siri (that voice recognition thing on iPhones which, by the way, only works if your accent resembles that of the Queen or Charlton Heston). I bet my bottom dollar that before long, every major mobile phone manufacturer will be running something similar.

Here, I gnash my teeth. Predictive texting is reasonably easy to do as long as you have a framework you can feed your data into – a spellchecker’s wordlist, for example. But it’s taken around a decade for such a framework to grow out of the cyber community. Speech recognition is harder. A lot harder. I have no idea how long it will take for languages such as Gaelic to clear that hurdle, and even less of an idea how many of this planet’s 6,000 languages will manage to do so. And that makes it all a little frustrating.
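To show what I mean by “reasonably easy” – and this is a toy sketch, nothing to do with how Adaptxt or any real keyboard engine is actually built, with made-up Gaelic placeholder words and frequency counts – all you really need to get off the ground is a word-frequency list of the sort a spellchecker project already maintains:

```python
# Toy prefix-based word prediction from a frequency list (a sketch only,
# not how any real predictive texting engine is implemented).
from collections import Counter

# In practice this would come from a corpus or a spellchecker's wordlist;
# these Gaelic words and counts are placeholders for illustration.
frequencies = Counter({
    "madainn": 120,   # morning
    "math": 300,      # good
    "mathan": 15,     # bear
    "latha": 210,     # day
})

def suggest(prefix, n=3):
    """Return the n most frequent known words starting with the typed prefix."""
    matches = [(word, count) for word, count in frequencies.items()
               if word.startswith(prefix)]
    matches.sort(key=lambda pair: pair[1], reverse=True)
    return [word for word, _ in matches[:n]]

print(suggest("ma"))  # ['math', 'madainn', 'mathan']
```

Real engines add corpus-trained frequencies, next-word context and learning from the user, of course, but the point stands: the hard part isn’t the algorithm, it’s having a framework – and the language data – to feed into it.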

I don’t know what the answer is. Right now, I just feel it would be nice if stuff slowed down a bit. Honestly, how much technological innovation do we need in 12 months? Or rather, how many false summits can we and our languages keep pace with?
