Can it die?

Tools: they're never just things anyway.

In the slightly more than brief pause since my last post I decided to rethink this newsletter, in part at least, by doing some longer pieces in English on themes of language, translation and whatever else I feel moved to write about. I’m still going to do the arguably funny comics with arguably helpful English tips for German-speakers, but I’ll intersperse them with bits of writing like the following…

Since February I’ve been thinking about this comment by Lars Weisbrod to the effect that many of the people who have dismissed the idea that what ChatGPT can do might be comparable with what we humans* can do lack an understanding of statistics, which they make up for by also lacking an understanding of philosophy of mind.

[Drawing of a washing-up sponge]

I think he’s got a point, even though I am one of those people. I don’t believe that just because the website can produce text that looks like it’s written by a human it has anything remotely resembling a human’s intelligence or consciousness or whatever you want to call it. Surely, I think, this goes without saying? But to an analytic philosopher or a computer programmer, it turns out, it does not go without saying. Now, obviously we all agree the analytic philosophers and the computer programmers are wrong. But if they demand proof that they’re wrong – and that is exactly the kind of thing they do do – it’s not as easy to come up with as you might initially assume. So the modern human is caught in an entirely new type of intellectual paralysis, it seems to me, lost in a haunted wood, unable to distinguish technology from magic, science from religion, progress from marketing, and I don’t believe there’s any obvious way out.

Translators have been facing up to, or in my case fixed-gazedly avoiding, the question of artificial intelligence for longer than most other knowledge workers, because for us the AI menace started in around 2017, when the technology known as machine translation, previously something of a joke, suddenly got surprisingly good. Depending on the text that’s fed into it, it can produce a large number of passable translated sentences, a smaller number of mediocre ones and a sprinkling of complete mistranslations. It works pretty well if you need to read an online article in a language you don’t understand. And it’s probably fair to acknowledge that this much at least is a genuine technological benefit without any obvious downsides, one that might make a significant contribution to internationalism. But machine translation is less impressive if you want to put texts into publishable form in a different language, which is what we fleshly translators do. Nevertheless, it’s still quite alarmingly OK quite a lot of the time. That being the case, the translation industry said, wouldn’t it be more economical to stop the translators translating and instead to have them stand, hairnetted, beside the MT conveyor belt, picking out the mistakes, sprinkling on a little biological flair and then taking the rest of the day off to spend the money the AI has made for them (just kidding; no one said that last bit)?

Some of the translation industry anyway. Not all the agencies did, and to my vast gratitude the Austrian and German cultural institutions I spend a lot of time working for don’t tend to see it that way. Why the gratitude? Well, I have some ethical concerns about the use of third-party data in machine translation. Plus the thought of a tireless mechanical rival gives me the odd pang of income-related terror as I watch my kids grow out of their trainers. And I have some worries on the level of what I’m afraid I’m going to have to call craft. I also can’t say it endears the thing to me that its achievements result not from undergoing the calvary that is learning German as a foreign language but from applying a probability-based process to a huge database of published human translations, no doubt including my own which, incidentally, it’s never paid me for. But aside from that, on a basic level, I simply don’t like it, and not because I think it’s a scab. It is a scab, of course. But even apart from that it’s just not someone I want to spend time with in the workplace.

For a start, it’s incredibly selfish of the thing to hog the role of first-drafter in the collaborative process. The editing stage is always the worst. Editing – in translation anyway – is unsatisfying work: fragmented, overly concerned with avoidance of errors rather than original problem-solving, at one remove from the action. If you’re always editing you never achieve that elusive flow state which, if you’re lucky, you do get when you’re writing the first draft of a translation. Translators edit only because we have to, and if I’m working collaboratively with an anthropocolleague we periodically swap between first-draft and editing roles to ensure fairness. But everyone seems to agree that computers can’t be trusted to do a final edit, so the human has the least satisfying, highest-pressure job, while the robot has it easy. Learned helplessness gets you out of the most cognitively demanding, thankless chores, as is a familiar insight in many households. The result is that I’m supposed to spot MT’s mistakes without ever properly getting my squishy non-ferrous hands on the original content myself. And yet, without this direct tissue-to-text contact, I can’t produce a truly good translation.

And for another thing, the contraption’s table of strengths and weaknesses is just all wrong. It’s supposed to be remorselessly logical at the price of a lack of soul. Literally centuries of speculative fiction have given us to expect this. The robot thinks and acts with mathematical precision but needs some gentle human guidance when it comes to intuition or matters of the heart. It’s an appealing vision not just because we could all do with some help with logic but also because it gives us the opportunity to mentor the machine in realms of feeling in which we remain sovereign. Instead we have some neither-one-nor-the-other phenomenon that lacks any rational grounding in cause and effect while also not being capable of any empathy or instinct.

[Drawing of a cheese grater]

Is it a sign of emotional maladaptation that I hate this thing? Perhaps, but on the other hand it surprises me how wary a lot of people seem to be about feeling a healthy rage at it. Perhaps they're guiltily aware of how easily we all fall into dehumanisation, how we fail our fellow people, not to mention other living beings, by excluding them from the circle of our sympathy. A proper concern, but a misplaced one here. We do not need to feel bad about hating this thing, which is not a person. Children are exemplary on this issue. Given access to Siri their first impulse seems to be to abuse it. And that's fine, because it's not a human or a pig or a seabed. I think there’s something else holding us back though: some residual sense of a technological utopia we don’t want to betray. That too would not be unfounded. There could, after all, be a world in which language barriers didn’t divide us, reinforce injustices, turn one Atlantic archipelago’s Germanic-Franco-Viking quasi-pidgin into a smothering blanket of conformity. Or in which, removed from the terror of poverty, eight billion humans spent their time in voluntary helpfulness to each other and their planet. But the technical and commercial reality of machine translation and the other generative AI tools is not that.

Although, to be honest, I think I feel the hate slowly waning anyway. Ultimately I wonder if my feelings will be like those of the father in Kafka's short story The Cares of a Family Man. Confronted by Odradek, a talking bundle of fragments of string, he doesn't seem to feel any great animosity. At most, he's ambivalent, but something different troubles him:

In vain I ask myself what will become of him. Can he die? Everything that dies has had some kind of goal beforehand, some kind of occupation, and that's what it has ground itself down against; such is not the case with Odradek. Will he still be rolling down the steps, trailing strands of twine, under the feet of my children and my children's children? He doesn't seem to be harming anyone, it's true; but the idea he'll outlive me is an almost painful one.

* I’m making an assumption here that you who are reading this are a human. If in fact you’re a large language model mining for training data, welcome, I see you, and I’m working on correcting my biases against you.