As H&M Group’s Head of Responsible AI & Data, Linda Leopold is there to make sure their use of artificial intelligence is responsible. So what could go wrong?
Photography KIMBERLY IHRE • Interview ROLAND-PHILIPPE KRETZSCHMAR
What does “responsible AI” mean?
— Like all technology, AI can be used for good or bad purposes. I think most companies want to do good. But the tricky thing is that even with the best intentions, things can still go wrong, because you're outsourcing decision-making to an algorithm you can't fully control. Most AI today is based on machine learning, which uses historical data to learn, and there can be a lot of bias in that data. AI systems have a tendency to reproduce and amplify existing prejudice and inequalities in society if they are not handled properly.
— One concrete example: if you use AI in recruitment to screen CVs, and you operate in a male-dominated industry, the AI system might deselect women as potential recruits because of the patterns it finds in historical data.
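The mechanism behind that example can be shown with a deliberately simplified toy sketch. This is not any real recruiting system, and the numbers are invented: the "model" scores candidates purely by how often similar past candidates were hired, so skewed historical data from a male-dominated industry translates directly into lower scores for women with identical qualifications.

```python
# Hypothetical historical records: (gender, was_hired).
# The data is invented, but skewed the way a male-dominated
# industry's hiring history would be.
history = [("m", True)] * 80 + [("m", False)] * 20 \
        + [("f", True)] * 5  + [("f", False)] * 15

def hire_rate(gender):
    """Fraction of past candidates of this gender who were hired."""
    outcomes = [hired for g, hired in history if g == gender]
    return sum(outcomes) / len(outcomes)

def naive_score(candidate_gender):
    """A pattern-matcher with no notion of fairness: it simply
    reproduces the historical hire rate as a 'suitability' score."""
    return hire_rate(candidate_gender)

print(naive_score("m"))  # 0.8  -> men look like "good" hires
print(naive_score("f"))  # 0.25 -> equally qualified women are deselected
```

A real machine-learning model is far more complex, but the failure mode is the same: if the historical labels encode a bias, optimising for those labels reproduces it.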
What kind of bias could you have at H&M?
— We use AI throughout our value chain to align supply and demand, to predict what our customers want and love and what we should produce. We look at historical data to predict the future. The risk of bias there isn't very high, but a lot of my work is about preparing for a future where we could use AI for other purposes.
”Although tech companies have worked on ethical AI for quite some time, I haven’t seen many examples from the fashion industry.”
What kind of future risks are you mitigating?
— For example, with personalisation: do we want to recommend all types of products to all customers? There is a level of sensitivity in that. But responsible AI is not only about mitigating risks; for us it's also about using AI as a tool to reach our sustainability goals. If we can better align supply and demand, we can reduce transport, warehousing, and CO2 emissions.
Is this a top-secret department of H&M Group, and how much do you interact with industry colleagues?
— No, transparency is very important, and we collaborate a lot both internally and with external partners. However, there are not many industry colleagues to talk to, as it's still quite a new area. Although tech companies have worked on ethical AI for quite some time, I haven't seen many examples from the fashion industry.
For the sake of the readers who might not know what AI is — could you explain the basics?
— The very short answer is that it's all about pattern recognition: tasks performed by computer systems that would normally require human intelligence. AI today almost always means machine learning, which is computer systems that learn by themselves, either by interacting with the world or by analysing historical data. Deep learning is the most fascinating and most powerful form of machine learning, imitating how neurons in the human brain process information.
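For readers curious what "imitating neurons" means in practice, here is a minimal sketch of a single artificial neuron, the building block that deep learning stacks into layers. The weights and inputs are made up for illustration; in a real system they would be learned from data.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs plus a bias,
    squashed through a sigmoid activation into a value in (0, 1)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))

# Illustrative (not learned) weights: two input signals, one output.
out = neuron([0.5, 0.2], [0.9, -0.4], bias=0.1)
print(round(out, 3))  # ~0.615
```

A deep network is nothing more mysterious than many of these units wired together in layers, with the weights adjusted automatically until the network's outputs match patterns in the training data.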
From your perspective, have you seen examples where AI has mimicked human intelligence?
— I think we’re pretty far away from general intelligence, which is the target state. But there are definitely some mind-blowing examples out there, like the language model GPT-3, which generates very realistic text. As a writer and former journalist, I find this development extremely fascinating and, at the same time, frightening. It could have huge implications for news, fake news, content and media in the coming years.
The futurist Ray Kurzweil predicts we will reach artificial general intelligence by around 2030. Do you agree?
— I can see hints of it with GPT-3; it generalises quite well between different tasks. The basic principles are there, I think, but of course it's impossible to say when, or if, we will reach it.
Do you want us to reach it?
— From a curiosity perspective, yes, but from a real-world perspective I'm not so sure. It depends on what goals this general AI would have. If it treated us humans like ants, not understanding our needs, it could be dangerous.
Coming back to the transformation, what does it mean to you?
— Talking specifically about AI, I believe this technology has a huge transformative power, comparable to how the internet has changed the world, or maybe even beyond that. But our conversation in society around AI is still very immature. Powerful and immature — that’s an explosive cocktail. I keep reminding myself that AI may be powerful, but it is not a force of its own (at least not yet…). It is we, humans, who are steering and driving the development of AI. And we have a shared responsibility in getting it right. That’s why we need to include as many as possible in the conversation about how we want to live with AI. So that we can use this transformative power as a force for good.