Who are you?
— I’m a professor of Gender and Society at Linköping University in Sweden. In my research, I’ve looked for many years at the way that technology both mirrors and changes our understanding of who we are and what we do. Technology is very good at reproducing social norms, but we can also use it to change them, Johnson explains, continuing:
— One of the projects that I’m working on right now looks at the new production of synthetic data — the way that AI and machine learning are going to be making data that are supposed to be anonymous and private. We can trade in these data, and they will be easy to move across borders, between and within companies. But they are not necessarily going to reproduce the social diversity that they are supposed to represent. And some of them will reproduce biases in the original data, which can have significant consequences as the synthetic data are used for commercial or policy decisions. So, together with colleagues from the Technical Faculty at Linköping University, the Division of Gender Studies in Linköping, and Chalmers in Gothenburg, we are looking at how we can produce metrics for synthetic data to make sure that the synthetic version reproduces that diversity in different and better ways; kind of tweaking the system to produce better synthetic data.
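As a rough illustration of the kind of metric Johnson describes (a sketch only, not the project’s actual method; the function name `subgroup_shift` and the toy data are invented for this example), one could compare how often each subgroup appears in the original data versus the synthetic data, so that a synthetic dataset which quietly shrinks a minority group shows up as a large score:

```python
from collections import Counter

def subgroup_shift(original, synthetic):
    """Total variation distance between the subgroup proportions of an
    original dataset and its synthetic counterpart.
    0.0 means the synthetic data mirrors the original mix exactly;
    values near 1.0 mean a subgroup has been largely erased."""
    p = Counter(original)
    q = Counter(synthetic)
    n, m = len(original), len(synthetic)
    groups = set(p) | set(q)
    return 0.5 * sum(abs(p[g] / n - q[g] / m) for g in groups)

# Toy example: a minority group shrinks from 20% to 5% in the synthetic data.
original  = ["A"] * 80 + ["B"] * 20
synthetic = ["A"] * 95 + ["B"] * 5
print(round(subgroup_shift(original, synthetic), 4))  # prints 0.15
```

A metric like this only captures one narrow sense of diversity — the proportions of predefined groups — which is exactly why, as Johnson notes below, different disciplinary perspectives are needed to decide what should be measured in the first place.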
And do you have any ideas yet about how to do it?
— Not yet. But I think we’re going to get there.
And if you were to be a bit more specific about how you will do it, how would you explain it?
— I think one of the main keys to doing it is of course looking at the algorithms, but it’s also looking at the algorithms with lots of different disciplinary eyes. So the project that we’re working on together has social scientists, and it has people working in technology, with image-visualization and machine-learning competencies, computer skills, and math skills. We are engaging with understandings of what social diversity means, and of how power structures in society produced that type of diversity in the first place, together with more statistical understandings of fairness. Thinking through science as a team project, as something that requires people with different backgrounds and perspectives, and listening to all of those voices at the table together is one of the keys to being able to produce technologies that are more responsive to what we want society to be.
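One of the “statistical understandings of fairness” that such a team might draw on is demographic parity. The sketch below is hypothetical (not the project’s own code; the function name and toy data are invented) and shows the simplest version of the idea: measuring the gap between groups’ favourable-outcome rates.

```python
def demographic_parity_gap(outcomes, groups):
    """Difference between the highest and lowest positive-outcome rate
    across groups. A gap of 0 means every group receives the favourable
    outcome (outcome == 1) at the same rate."""
    rates = {}
    for y, g in zip(outcomes, groups):
        num, den = rates.get(g, (0, 0))
        rates[g] = (num + y, den + 1)
    per_group = [num / den for num, den in rates.values()]
    return max(per_group) - min(per_group)

# Toy example: group "X" gets the favourable outcome 75% of the time,
# group "Y" only 25% of the time.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]
groups   = ["X", "X", "X", "X", "Y", "Y", "Y", "Y"]
print(demographic_parity_gap(outcomes, groups))  # prints 0.5
```

Demographic parity is only one of several competing statistical definitions of fairness, and they can conflict with one another — which underlines the interview’s point that deciding what “fair” should mean is not a purely technical question.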
And where do you look? In certain industries or certain regions?
— We’re looking quite broadly, of course. But I think one of the really important things to keep in mind when looking at technological development — and this is something that the field of science and technology studies has long examined — is that technology is produced out of particular contexts. It is what we would call contingent, and it has to answer to the unspoken norms and invisible values of those contexts, but also to the policy and legal frameworks in those different contexts. So, for example, the use of data in the US is regulated differently than it is in Europe and in other parts of the world. And as we move between these different contexts, we have to be keenly aware of how the policies and regulatory structures impact what sorts of data we have access to and what sorts of data we need to produce in order to use data within those frameworks. But the actual collection of data is done in different ways and is context-dependent, too… This is essential to remember as we reproduce those data.
AI is now a buzzword. If you look at what you study, is it a good thing for things like social norms, or is it a potential threat?
— I think all technologies have potential benefits and potential threats. If you ask a historian of technological change, you’ll find that there is always a promise of benefit, but also a threat of unwanted change, associated with all new technologies. Maybe accentuating things we don’t want to continue to accentuate, or producing new problems we hadn’t even seen on the horizon. There’s also the promise of being able to use those technologies for good. However, I think the question as we tend to pose it is: ’Will AI do this? Will AI be bad or good? Will it help us or hurt us?’ And I feel that by asking that question, we’re putting the agency on the AI. Instead, I would like to reframe the question and ask how we will use AI, and how we will be able to benefit from it. How will we be able to make it help us achieve the goals that we see as valuable, and whose voices do we need to listen to, to make sure that we use AI well? says Johnson. She adds:
— We have to be responsive to and responsible for the way that these types of technologies and imaginaries — the artefacts and the algorithms themselves, but also our imaginaries around them — are used and engaged with by the people that we’re working with, and the people that we live with in the world today. Technology can reproduce the social norms and values of the context it is developed in, but we can also use it — and sometimes misuse it — to change the world we live in for the better.
How will you proceed with the research?
— Our current efforts are directed towards engaging different understandings of ’fair’ in the production of synthetic data — as well as working with diverse actors in the synthetic-data ecosystem to find ways that ’fair’ can be an integral part of the process.