Archive for the ‘Localization’ Category

How to stonewall Open Source

07/03/2015 3 comments

I seem to be posting a lot about Google these days but then they ARE turning into the digital equivalent of Nestlé.

I’ve been pondering this post for a while and how to approach it without making it sound like I believe in area 52. So I’ll just say what happened and mostly let you come to your own conclusions.

Back when Google still ran the Google in Your Language project, I tried hard to get into Gmail and what was rumoured to be a browser, but failed, though they were keen to push the now canned Picasa. <eyeroll> Then of course they canned the whole Google in Your Language thing. When I eventually found out that Google Chrome is technically little more than a rebranded version of an Open Source browser called Chromium, I thought ‘great, should be able to get a leg in the door that way’. Think again. So I looked around and was immediately confused because there did not appear to be a clear distinction between Chromium and Chrome. The two main candidates for raising the issue seemed to be Launchpad and Google Code. So in January 2011 I decided to file an issue on Google Code, thinking that even if it was the wrong place, they should be able to point me in the right direction. The answer came pretty quickly. Even though the project is called Chromium, they (quote) don’t accept third party translations for chrome. And nobody seems to know where the translations come from or how you become an official translator. Plus a vague suggestion that I maybe should try Ubuntu.

I gave it some time. Lots of time in fact. I picked up the thread again early in 2013. Now the semi-serious suggestion was to fork Chromium and do my translation on the fork. Very funny. Needless to say, I was getting rather disgusted at the whole affair and decided to give up on Chrome/Chromium.

When I noticed that an Irish translator on Launchpad had asked a similar question about Chromium, and that the answer was that, as far as anyone knew, translations do get pushed upstream from Launchpad to Chromium, I decided I might as well have a go. As someone had suggested, at least I’d get Chromium on Linux.

Fast forward to October 2014 and I’m almost done with the translation on Launchpad, so I figure I’d better file a bug early because it will likely take forever. Bug filed, enthusiastic response from some admin on Launchpad. Great, I think to myself, should be plain sailing from here on. Spoke too soon. By the end of January 2015, the translation long since completed, my queries are met with silence and then more silence. More worryingly, someone points me at a post on Ubuntu about Chromium on Launchpad being, well, dead.

Having asked the question in a Chromium IRC chat room, I decided to have another go on Google Code, new bug, new luck maybe? Someone in the room did sound supportive. That was January 28, 2015. To date, nothing has happened apart from someone ‘assigning the bug to l10n PM for triage’.

I’m coming to the conclusion that Chromium has only the thinnest veneer of openness. Perhaps in the sense that I can get hold of the source code and play around with it. But there is a distinct lack of openness and approachability about the whole thing. Perhaps that was the intention all along: to use the Open Source community to improve the source code but to give back as little as possible, building as many layers of secrecy and putting as many obstacles in people’s path as possible. At least when it comes to localization.

At least Ubuntu is no longer pushing Chromium as the default browser. But that still leaves me with a whole pile of translation work which is not being used. Maybe I should check out some other Chromium-based browsers like Comodo Dragon or Yandex. Perhaps I’m being paranoid but I’m not keen on having software from Russia on my systems, or on recommending it to other people. Either way, I’m left with the same problem that we have with Firefox in a sense – it would mean having to wean people off pre-installed versions of Google Chrome or Internet Explorer.

Anyone got any good ideas? Cause I’m fresh out of…

When peer review goes pear shaped

29/01/2014 2 comments

Well I’m glad I asked. What happened was this…

I had a request from someone asking if I could localize TinyMCE (a WYSIWYG editor – think of it as a miniature form of Word sitting within a website) so they could use it on their website for their Gaelic-speaking editors. There aren’t that many strings and the project is handled on Transifex using po files, so the process seemed straightforward too. (If you don’t know what a po file is: the main thing about them is that many translation memory packages can handle them, so if you have already done LibreOffice or something like that and stored those strings in your memory, there will be few strings in a project like TinyMCE for which there is no translation memory suggestion. In a nutshell, it allows an experienced software translator to work much faster.)
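
If you’ve never seen one, a po file is really just a long list of msgid/msgstr pairs, and that simplicity is exactly why so many tools can chew on them. Purely as an illustration – the file name and the library are my own choices, not part of the TinyMCE setup – here is a minimal Python sketch, using the polib package, of the kind of thing a translation memory tool does with one:

# Minimal sketch only: "tinymce_gd.po" is a made-up file name and polib
# (pip install polib) is just one of many libraries that read po files.
import polib

# A po entry boils down to a source string and a translation:
#   msgid "Bold"
#   msgstr ""        <- still to be translated
po = polib.pofile("tinymce_gd.po")

for entry in po.untranslated_entries():
    # This is the point where a translation memory would chip in with
    # suggestions from strings already translated in LibreOffice & Co.
    print(entry.msgid)

print(po.percent_translated(), "% translated")
po.save("tinymce_gd.po")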

So off I go. Pretty much a cakewalk: half a Bond film and two episodes of Big Bang later, the job was done. Now in many cases, once a language has been accepted for translation and you have translated all or at least most of the project, the translations will show up in the released program eventually. But just because I’m a suspicious old fart (by now), I messaged the admins and asked about the process of getting them released. Good thing too. Turns out they use an API to pull the translations from Transifex onto their system (they’ve basically automated that step, which I can understand). The catch, however, is that it only grabs translations set to Reviewed.
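
For what it’s worth, here is roughly what that automated pull looks like – a hedged sketch from memory of the Transifex API as it was at the time (the v2 one), with made-up project and resource slugs rather than TinyMCE’s real ones. The point is the mode=reviewed bit: anything merely translated but not reviewed simply never makes it into the file that gets built.

# Hedged sketch of a reviewed-only pull from the old Transifex v2 API.
# Slugs and credentials are placeholders; the API details are from memory.
import requests

API = "https://www.transifex.com/api/2"
AUTH = ("some-user", "some-password")  # or an API token

url = API + "/project/example-project/resource/example-resource/translation/gd/"

# mode=reviewed asks for a file in which only reviewed strings are filled in.
resp = requests.get(url, params={"mode": "reviewed"}, auth=AUTH)
resp.raise_for_status()

# The v2 API wrapped the file content in a small JSON blob.
with open("gd.po", "w", encoding="utf-8") as f:
    f.write(resp.json()["content"])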

Cue a groan from me. To cut the TinyMCE story short at this point, it seems this is down to Transifex (at least according to the TinyMCE admin), so they were quite happy for me to just breeze through the strings and set them to Reviewed myself. Fortunately it wasn’t a large job, so 15 minutes later (admittedly, I have about 14 other jobs on my desk just now which I would rather have done…), they were all set, thank goodness for keyboard shortcuts.

But back to the groan. I have come across this approach before and on the face of it, it makes sense. If you do community translation (i.e. you let a bunch of volunteers from the web translate into languages which you as admins don’t understand and don’t have time to QA) but you’d like at least some measure of QA over the translations, then adding a peer-review step means you can be at least more or less sure that you’re not getting ‘Jamie is a dork’ and ‘Muahahaha’ type translations.

The only problem is, peer review in online localization relies on large numbers of volunteers. Only a small percentage of speakers have any inclination towards translating pro bono publico and even fewer feel like reviewing other people’s translations (there is something slightly obscene about proofreading, it’s like having someone else put words in your mouth, they almost always taste funny…). I once did some rough and ready stats on the percentage of people of a given language who will be engaged in not-for-profit localization (of mainstream projects like Firefox or LibreOffice). It’s about ONE active localizer for every 500,000 speakers. So German can call upon something like 20 really active localizers. Scottish Gaelic on the other hand statistically has … well, it has fewer than 60,000 speakers. You work it out. So it’s seriously blessed by having TWO of them.

In any case, even if you disbelieve my figures (I’d be the first to admit to being no great shakes with numbers), the percentages are really small. So if you set up a translation process that requires not only translation but also peer review, you’re essentially screwing small languages, because the chances are there will never be a reviewer with enough time or energy (never mind ability) to review stuff. It’s one of the reasons why we haven’t touched WhatsApp yet – they simply won’t let a translation go live without review.

So if you design a process like that and want to make sure you’re not creating big problems for smaller languages (and we’re not just talking Gaelic-style tiny languages; even languages like Kazakh or Estonian have such problems), make sure you

  • allow enough wriggle-room to override such requirements, for example by allowing a localizer to demonstrate their credentials (say, through long-term participation in other projects) and
  • design a system where, if it’s absolutely necessary to set specific tags, admins can bulk-tag translations for a certain language (a rough sketch of what I mean follows below).
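
To make the second point a bit more concrete, here is the kind of bulk “mark as reviewed” helper I have in mind, sketched from memory against the old Transifex v2 API – the endpoints, field names, slugs and credentials are all illustrative rather than gospel, but something of this shape would let an admin unblock a small language in seconds rather than leaving a finished translation to rot:

# Illustrative sketch only: bulk-review every translated string for one
# language. Based on my recollection of the Transifex v2 API; details may
# well differ, and the slugs/credentials are placeholders.
import requests

API = "https://www.transifex.com/api/2"
AUTH = ("admin-user", "admin-password")
BASE = API + "/project/example-project/resource/example-resource/translation/gd"

strings = requests.get(BASE + "/strings/", auth=AUTH).json()

for s in strings:
    # Only touch strings that have a translation but no review flag yet.
    if s.get("translation") and not s.get("reviewed"):
        requests.put(
            BASE + "/string/" + s["source_entity_hash"] + "/",
            json={"reviewed": True},
            auth=AUTH,
        )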

Over and out.

Wishful thinking à la Bretonne

03/02/2013 8 comments

Have you noticed that sometimes developers DO get it right but are then faced with strange user behaviours? No, I’m not talking about developers assuming something is the case when it isn’t. I’m talking about a strange chain of events on Facebook which makes me doubt the motivation of some language activists (yes, we’re allowed to self-criticize, guys!).

We all know about Facebook. What we don’t all know about Facebook is that they have a pretty bizarre approach to translations (we can hardly call it localization…) and I don’t mean the fact that they, for the most part, rely on community volunteers. No, it’s the process. There’s no clear process for adding or registering a new project and heaven knows how they actually pick the languages. At one point, Rumantsch was in (it now isn’t; no idea how it got in or why it’s now out, it’s a fairly small language with between 35,000 and 60,000 speakers), as are Northern Sami, Irish, Mongolian and the usual big boys, including some questionable choices like Leet Speak and Pirate. So most languages are out. Not surprisingly, this has led to a number of Facebook groups and campaigns by people trying to get their languages into the project. There used to be a project page full of posts along the lines of “please add my language” and “how do we get Facebook to add our language?” – universally met with thundering silence. Admins were rarer than Lord Howe Island stick insects.

Back in whenever, a chap called Neskie Manuel had a crafty idea for getting his language, Secwepemctsín, onto Facebook. Why not, he figured, find a way of overlaying Facebook with a “translation skin” in order to make the process of translation (and in this case even localization) independent of Facebook & Co? It was a neat idea, which was sadly cut short by his untimely death.

Now, round about the same time, two things happened. The Bretons set up a “Facebook in Breton” campaign. Fair enough. And a chap called Kevin Scannell took on board Neskie’s Facebook idea. Excellent. Before too long, the Facebook group had over 12,000 members and Kevin had released his script for a slew of amazing languages. It overlays not all of Facebook but just the most visible strings (the ones we see daily, not the boring EULAs and junk). Even more amazingly, it can handle stuff Facebook hasn’t even woken up to yet, such as plurals, case marking and so on. Wow indeed.
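
To give you an idea of what “handling plurals” actually involves (Kevin’s script is JavaScript; the snippet below is only my own Python illustration of the principle, and the plural rule is the one I recall gettext-based projects using for Scottish Gaelic): English makes do with two forms, Gaelic needs four, so a string like “{n} friends” needs four distinct translations, picked by a rule along these lines.

# Illustration only: the four-way plural rule commonly used for Scottish
# Gaelic (gd) in gettext-based projects, as far as I recall it. A message
# like "{n} friends" would carry four translated forms, indexed like this.
def gd_plural_index(n):
    if n == 1 or n == 11:
        return 0
    if n == 2 or n == 12:
        return 1
    if 2 < n < 20:
        return 2
    return 3

for n in (1, 2, 15, 40):
    print(n, "-> form", gd_plural_index(n))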

The languages hailed from the four corners of the planet, from Aragonese, Manx and Nawat through Hiligaynon, Secwepemctsín, Samoan, K’iche’ and Māori to Kunwinjku and Gundjeihmi (two Australian languages). Wow indeed. And, of course, Breton.

Now here’s the bizarre thing though. Ok, it’s not the full thing, but who’d turn down a sandwich while waiting for a roast chicken that might never appear? No one, you’d think. So based on a combined market share of some 50% between Firefox and Chrome, some 200,000 speakers and 12,000 people in the “Facebook in Breton” group, you’d expect, what, anything north of 6,000 enthusiastic users of the Breton script? After all, more than 1,100 people installed it in Scottish Gaelic (fewer than 60,000 speakers) and more than 500 people in Manx (way fewer than 2,000 fluent speakers).

A case of “you’d think” indeed. To date, a mind-boggling 450 people have installed it in Breton. As far as I can tell, the translation is good and was done by a single, highly fluent speaker (Fulup Jakez who works for Ofis ar Brezhoneg). So it’s not a quality issue. The scripts work (I use the Gaelic one) so it’s not that either. The Facebook group was notified several times, so it’s not like they didn’t know. Ok, so maybe not all Likes of the group actually are from speakers, fair enough, but glancing through the active posters, a lot of them seem to be in the right “linguistic area”.

So while the groupies are still foaming at the mouth about the lack of support from Zuckerberg and Co, there’s a perfectly good interim solution that would allow you to say Kenavo to French and Degemer mat to Breton on Facebook every day. I really don’t get it. Is it really the case that some activists are more in love with the idea of the thing than with actually using it once it exists? Or am I missing something really obvious? I sure hope I am…

On a more positive note, I hope the general idea of this type of “overlay” will eventually take off big time. We will never be able to convince the big boys to support all the languages on the planet, all of whose speakers are equally worthy of services in their own languages, whether they’re trying to re-grow lost speakers or whether they’re just a small to medium-sized community. So having a tool that puts control over what we see on our screens into our own hands would be great. No more running from company to company trying to make the case for adding language X, a little less duplication (I don’t know how many zillion times I’ve translated “Edit picture”), better quality and more focus on the important bits of an interface to translate (not the EULA, for example… a document that sadly every software company is keen to have translated as soon as possible without ever asking who’ll read it). Ach well, I can hope…

The what keys?

02/06/2012 4 comments

I’ve just been through a head-scratching exercise and before you suggest anti-dandruff shampoos, it was about access keys. Yes… or was it shortcut keys? Or a hotkey? Or a quick key? Which sums up part of the problem – there are too damn many of them. Now the basic idea is solid – access keys are keyboard combinations which allow you to instruct your computer to carry out frequently used tasks without having to click through the menus. So far, so good. For example, on Windows CTRL c has long been the command for copy and CTRL v for paste. Then there’s CTRL z for undo and… errr… yes, to be honest, that’s all I ever use and I use PCs a lot, more than I care to think.

I don’t know who invented the first access key but our friends the consumers-of-too-many-pizzas must have thought this was brilliant. If copy and paste access keys are good, surely there must be other useful ones… like for open, save, close, tools, help, save as, pluck a chicken, pick your nose… and soon the whole program was peppered with the damn things. Not only that one program of course… wherever it started, it soon spread to the rest and, like the thing about electric plugs, everyone used a different name and a different key combination without ever giving a thought to the end user. Was it CTRL j, ALT j, ALTGR j or ALT CTRL SHIFT j? Or ALT OPTION or hang on, that was my DoodleBug program on Windows, I’m now on a Mac in VLC. Should I use the Apple button or Fn?

I bet if you did some research, you’d find that a lot of people only ever use a minute fraction of the available access/shortcut/whatever keys. Anecdotal evidence would suggest that the smart everyday user knows less than half a dozen and that most know none at all. And certainly no one uses them to navigate 5 levels down to the Proxy Settings of their browser. Yes, there have been attempts to streamline them but those again ignored the basic question of “do we need them” or “how many do we need”? In any case, the attempts have been about as successful as moves to standardise electric plugs or convince China that the rule of law is a good thing.

So what does this have to do with localization and my head-scratching? Well, unfortunately no one bothered to automate the process either. Which means that when you localize software, you have to add them manually. Now if the localization process in general were smarter, that might sort of work, but remember that when localizing software, from Microsoft to LibreOffice, what you essentially get is a table with one language in one column and your translation in another. Certainly no visual context. And usually no info which tells you anything about the scope (as in, which of these appear next to each other). So you’re faced with something like this:

&Fry
Deep fr&y
Stir-&fry
Stea&m
&Boil
Sa&utee
&Oven-roast

And it’s left to you to figure it out. In the above, your guess is as good as mine whether those all appear in the same menu or in two different ones (perhaps the first three in a Fry menu and the rest in an Other menu). Oh, and did I mention that they don’t even agree on the symbol? In some programs it’s &Fry (which gives the end user the line under the F in Fry), in others you have to write ~Fry and… oh, you get the idea.
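
Since no tool does this for me, here is the kind of sanity check I end up doing by hand, written out as a little Python sketch of my own (the menu is the made-up cooking example above, and it copes with both the & and the ~ marker):

# Sketch of a duplicate access key check over one (made-up) menu.
# Handles both marker conventions: &Fry and ~Fry.
from collections import defaultdict

def access_key(label):
    """Return the letter following the & or ~ marker, lower-cased."""
    for marker in ("&", "~"):
        pos = label.find(marker)
        if pos != -1 and pos + 1 < len(label):
            return label[pos + 1].lower()
    return None

def find_clashes(menu):
    seen = defaultdict(list)
    for label in menu:
        key = access_key(label)
        if key:
            seen[key].append(label)
    return {k: v for k, v in seen.items() if len(v) > 1}

menu = ["&Fry", "Deep fr&y", "Stir-&fry", "Stea&m", "&Boil", "Sa&utee", "&Oven-roast"]
print(find_clashes(menu))   # {'f': ['&Fry', 'Stir-&fry']} - two items fighting over F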

So to a half-baked idea, we’ve added a haphazard localization process. Great. Oh, did I mention the guidelines? The ones which say you should avoid putting the line under letters with descenders (anything dropping below the line, like gjpqy)? Which is usually fine in English with its 26 letters. But Gaelic only has 18, guys, 16 if I have to cut out letters with descenders and even fewer if I’m also told not to use thin letters (like ilf). Do I look like the Brahan Seer? I won’t even start on the difficulty of locating said string when you spot a wrong access key on screen during testing.

I did take the time to make sure that the most visible ones don’t overlap in programs like LibreOffice and Firefox. But several layers down, to be honest, I can’t be bothered. So I had to remind myself that the nice person who filed a bug on my behalf, with a list of them for some several-layers-down menu about frigging proxies, isn’t responsible for the general mess that is access keys, and not to bite their head off. In the end, I did post a condensed version of my reasoning – the fact that they’re mostly pointless and that, as a result, I don’t have the time and manpower to fix something which the translator shouldn’t have to fix in the first place.

Honestly, don’t people ever take a step back and think?

Sir, I don’t know where your ship learned to communicate…

07/03/2012 2 comments

But, dear 3PO, at least it’s making an attempt. Which is more than can be said for certain organisations, in spite of the recognized importance of good communication. Let me take you back a few years…

Most developers will already know what I mean by the OpenOffice to LibreOffice fork. For the rest of humanity, the short version goes something like this: StarOffice (then owned by Sun Microsystems) was made Open Source back in 2000 to counter the dominance of Microsoft Office. They called it OpenOffice and hey presto, there was a free alternative to Microsoft Office. Then Oracle bought Sun Microsystems and suddenly trouble was afoot. I won’t even bother to try and dissect who did what to whom and who was right or wrong. The short of it is, most of the community upped sticks, took a “copy” of the then current version of OpenOffice and set up a rival project called LibreOffice. That’s what’s called a “fork”.

Oracle then divests itself of OpenOffice by donating it to the Apache Foundation but that’s not so relevant here.

What IS relevant is the way in which this was all done. From the early days of OpenOffice, there were lots of translation teams, including a lot of “small” languages, because languages like Tibetan, Bodo and Oromo don’t normally even get a look-in at proprietary software houses. So they put a lot of time and effort into translating OpenOffice into their languages. To do so, they used an online tool called Pootle. Think of it as a big set of tables with the stuff to be translated on one side and your translations on the other, which remembers bits you’ve already translated. It obviously gets more complicated than that but that’s Pootle (or indeed any translation memory) in a nutshell. Ok, neat.

I joined OpenOffice late in 2010 when I got fed up with the Gaelic translation of OpenOffice having fallen several releases behind. Although technically legal under the OpenOffice license, the people who had been paid to translate OpenOffice into Gaelic (a company called Cànan) did what’s normally frowned upon – instead of sharing the translated strings with the Pootle server of OpenOffice, they built their own installation packages and distributed those from their own servers and via LTS (today called Education Scotland). In essence, sitting on the translations like a mother hen on its eggs. Your guess is as good as mine as to why. Anyway, the upshot was that I had to start from scratch again. Given I also used the chance to ensure the terminology aligned with the rest of the Gaelic software universe, perhaps not a bad thing but a lot of unnecessary work nonetheless. But regular sleep is for wimps.

So, there I was steaming ahead, when rumours of this new LibreOffice project reached me. To cover all bases, I also sign up for that project on the recommendation of a friend. But I continued my translation over on OpenOffice as I’d already started there. I reach the 2/3 mark when suddenly, the OpenOffice Pootle server goes dead on a Friday or Saturday I think it was. Not to worry, it’s probably just a glitch I tell myself. Yeah right. It never came to life again. Ever. No matter how many emails I posted to the mailing lists, nothing. Not even a response. Neither on the lists nor, thinking it might be more diplomatic, off the lists.

Luckily, I had taken a backup just the day before. Very lucky indeed, a total fluke, as I’m not normally that regular in making backups of stuff I figure other folk are backing up. So I did manage to migrate over to LibreOffice fairly unscathed but nonetheless scathing. If not for myself, then for the other teams who may not have been so lucky. And weighing in at about 100,000 words, redoing that from scratch is no piece of cake.

But wait, it gets better… Oracle donated the whole project to Apache, remember? Well, Apache are still trying to figure out what the most recent set of translations are and how to get the whole thing up and running again.

More than a year on, I’m still seething about it (as you may have guessed). Perhaps the developer mailing lists were all abuzz with the impending shutdown. But most translators don’t follow the development lists, there’s only so much mail an inbox can take. I don’t know. All I know is that on the translation lists, no-one warned about the shutdown. Which would have been – at the very least – the decent thing to do because whatever storms were brewing on the development side of OpenOffice, the translation list was concerned with its main aim – translation, not politics.

And to my knowledge, no one bothered to tell the users anything either. Not for a very long time anyway. All they noticed was that stuff wasn’t working as it should any more, like the extensions site which kept going offline.

Localization is often seen as an afterthought to “the real work” and while I don’t agree, that’s just the way it is. Fine. But 100 hours of a translator’s lifetime are just as important as 100 hours of a developer’s lifetime. Losing that tends to make translators a bit tetchy. It perhaps comes as no surprise that the majority of translators have decamped to LibreOffice and look set to stay there, even if Apache OpenOffice comes back on stream.

Which, especially in the Open Source world where volunteers (remember, they volunteer, they’re not serfs) have the choice of going somewhere else, tells you one thing – if there’s big stuff afoot, it pays to communicate this to the folk who might not be in the midst of the firestorm but who are involved nonetheless.

As if by magic…

27/02/2012 2 comments

We all know the feeling… software doing something that’s totally counter-intuitive, driving us mad in the process. Here, I can’t decide what’s worse: not doing user testing (or doing it badly, as in leaving it to IT people to test) or not listening to your users. Which of course applies to both open source and proprietary software.

Case in point, LibreOffice (the former OpenOffice). Yes, I’m the localizer for Gaelic there. Yes, I think it’s a really great project and really great software package and yes, I can’t see why schools and government are paying Microsoft money for their products which are getting more complicated by the day (yes, I hate the Ribbon). So what’s my bone? It’s the installation process for new users, oddly enough.

Now LibreOffice comes in over a hundred languages, including languages like Oromo, Tibetan and Ndebele which would normally have a fight on their hands to get into proprietary software. Fantastic. So what does an interested user do? Well, they go to the site, select their operating system (great, Windows, Linux and MacOS), select their language, download, install (puzzling a little over the install menu being in English but hey, maybe there’s a technical issue with that), write a letter to their granny in Oromo to tell her about this great thing they now have on their computer. Errr… let’s backtrack to step 2, selecting your language. I’m not sure what was being smoked in the room when the download and install process was designed but here’s what actually happens.

You actually download a fairly hefty file which contains the translated interfaces for all languages plus all the spellcheckers and grammar proofing tools that teams have bundled with LibreOffice. Bit of a bugger if you’re on a slow connection, folks… You then install and you reach a point where you have to select Typical or Custom installation. Now assuming a “normal” user who can’t program in C++ and doesn’t write regexes to solve his breakfast sudoku, you choose Typical. You complete the process and open LibreOffice – in English. At this point, wtf comes to your lips in whatever (and possibly all) languages you are most fluent in. You start rooting around in the gubbins, pardon, the Options, but your language isn’t there.

At this point you either persevere and eventually get the right answer or, in most cases, you give up, because who wants to bother with software that’s complicated when you’re installing it, never mind how simple it is once you’re using it? Now what is the right answer, you’re rightfully wondering? Duh, obviously you have to select Custom (never mind that up to this point most people are under the impression they’ve just downloaded their own language), then go to Additional language packs, click to expand the menu, unselect three types of English, select your language, then move on and hey presto. Oh, and did I mention that whichever path through this you pick, you still get the proofing tools for all languages installed, making it a real pain to find the one you’re actually using?

Yes, I’m shaking my head too. True, they may have inherited this from Oracle’s OpenOffice when they split (forked, as they’ll say). But we’re now several releases down the line and it’s still as insane as ever. Maybe someone like Microsoft can afford to piss off users but a recent splinter of an open source office suite which is trying to make it big?

Ok, so the current process allows you to select more than one language for your interface, which then allows you to switch, but for heaven’s sake guys, there are better ways of doing that… like downloading a new language pack from the web if you choose to add Welsh to your Zulu interface.

Projects like LibreOffice, in my view, can’t afford to let ease of use for the end user fall behind, even if developing something that shows the time in the Mayan Long Count just sounds like so much more fun than making sure the download and install process runs as smoothly as possible with a minimum of head-scratching, because somewhere down the line you’re either losing customers or someone has to provide a load of unnecessary support.

Detect locale – manna from heaven or hellspawn?

15/12/2011 6 comments

It seems like a good idea, doesn’t it? Web 2.0 and all that, increasingly intelligent software taking over the task of selecting your language when you visit a page.

Based on? Aye right, there’s a catch. Based on one of two things – the language of your browser or the preferred languages you can set in most browsers. Those of you who are normal end users have probably already spotted the problem. That’s all those of you (the majority) who went “I can do that?”. Yes, you can, but for the vast majority of people, what I shall call the “Install and Hope” group, that’s both news and several steps too technical. And before you tut at that degree of inability to tweak software, that probably includes your mum and dad, your aunts, uncles, grannies and gramfers. They’re not stupid people on the whole.

While the preferred-languages setting is actually the more intelligent of the two choices, because you can select from a relatively wide range of languages, it still limits you immensely. There are some 6,000 languages on the planet and even multilingual Firefox only offers maybe 200 or so in the language dropdown. And many of those are variants like Chilean Spanish, European Spanish, Argentine Spanish – what the codemasters call locales. So what are you supposed to do if you’re a speaker of one of those 5,800 NOT on the list?

But here’s the thing that drives me and many other speakers outside the club of the 25 biggest languages insane. Most sites these days take the lazy approach and just base it on the language of your browser, or worse, your operating system (Linux and most mobile phones do the latter). In the words of Julia Roberts, “mistake, big mistake”. Why? Because there are far fewer browser localizations than languages. And even if there is a browser localization in your language, that doesn’t mean everyone is using it.
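
In case it isn’t obvious what “base it on the language of your browser” means in practice, the whole negotiation boils down to something like this toy sketch (the list of offered languages and the header are invented): the browser sends an Accept-Language header, the site matches it against whatever it ships and falls back to a default. If the shared family computer’s browser says English, English is what you get, even where a translation in your language exists.

# Toy sketch of server-side locale detection. Offered languages and the
# example header are invented for illustration.
AVAILABLE = ["en", "fr", "ku"]   # what the site actually ships
DEFAULT = "en"

def pick_language(accept_language):
    # Header looks like "en-GB,en;q=0.8" - entries in order of preference.
    for part in accept_language.split(","):
        tag = part.split(";")[0].strip().lower()
        if tag in AVAILABLE:
            return tag
        base = tag.split("-")[0]         # en-GB falls back to plain en
        if base in AVAILABLE:
            return base
    return DEFAULT

# A Kurdish speaker on the family's English-language browser: the header
# never mentions ku, so the Kurdish version might as well not exist.
print(pick_language("en-GB,en;q=0.8"))   # -> "en"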

Let’s take your average family on the planet, which is – believe it or not – bi- or multilingual. That usually means that between the parents and kids at least two languages are used. Sometimes even more. But not everyone necessarily speaks both languages, and – this is where it gets tricky – they will often share a computer. Say we have a bilingual English-Kurdish family with a Kurdish-speaking father and a wife and kids who only speak English. The default language on the computer will most likely be English because, in most cases, you can’t install the same browser more than once and few offer you an easy way of switching the language. So the browser is in English. But for the sake of argument, say the father wants to blog in Kurdish using the Kurdish version of WordPress. He goes to the main page and looks for a list of localizations. Tough luck, there isn’t one, because WordPress.org relies on your browser language settings. So he downloads the English version even though there IS a Kurdish version – it’s just not obvious, because you have to go to http://ku.wordpress.org/

It could be worse – they could be using Ubuntu. True, it’s become more user friendly, but who designed that insane bit of forcing the locale? Let’s say Azo, the father, wants to install some other software only he will use, in Kurdish. What are you supposed to do in Ubuntu? Right, you go to the Software Centre. Only problem is, someone again figured that tying the language of any software you download to the OS language is a bright idea. Not for those of us who aren’t monolingual. And no, suggesting that Azo goes to the address bar, types about:config, searches for matchOS, toggles it to false and then sets general.useragent.locale to ku is NOT a solution.

This dance gets even more insane. Let’s say Azo would like to use the Kurdish version of a mobile browser on his phone at least, since he can’t get Kurdish WordPress. Unfortunately, he uses an Android phone. Meaning? Well, while there IS a localization in Kurdish, the language of his Android phone is English, so the phone assumes this person could only ever wish to use stuff in English and forces every installation to English. End of.

Proprietary or Open Source, same difference; Linux or Android, Windows or Mac, language selection is getting more and more difficult these days in spite of a legion of volunteers who strive to localize stuff into their languages. For free, usually.

So the question is, why, if there are localizations available for stuff, do you guys make it SO hard to get them? Isn’t that like baking a beautiful cake and then hiding it in the basement, assuming that everyone knows that’s where you hide the cakes?