In the popular imagination, artificial intelligence (AI) is usually portrayed as a quasi-divine entity that makes “just” and “objective” decisions. Yet AI is anything but intelligent. Rather, it recognises in large amounts of data what it has been trained to recognise. Like a sniffer dog, it finds exactly what it has been taught to look for. In performing this task it is far more efficient than any human being – but precisely this efficiency is also its problem: AI only mirrors what it has been instructed to reflect. Seen in this light, it may be viewed as a kind of digital “house of mirrors”.
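To make the “house of mirrors” metaphor concrete, here is a minimal, purely illustrative sketch in Python. The data, the labels and the deliberately naive majority-vote “model” are all hypothetical; the point is only that a system trained on biased historical decisions reproduces exactly the pattern it was shown.

```python
from collections import defaultdict

# Hypothetical past hiring decisions: (neighbourhood, hired?).
# The labels encode a historical bias against neighbourhood "B".
training_data = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", False), ("B", True),
]

# "Training": record the outcomes observed for each neighbourhood.
counts = defaultdict(lambda: [0, 0])  # neighbourhood -> [hired, rejected]
for neighbourhood, hired in training_data:
    counts[neighbourhood][0 if hired else 1] += 1

def predict(neighbourhood: str) -> bool:
    """Predict the majority outcome seen in training for this group."""
    hired, rejected = counts[neighbourhood]
    return hired >= rejected

# The model faithfully mirrors the biased labels it was given:
print(predict("A"))  # True  -- applicants from "A" are favoured
print(predict("B"))  # False -- the historical bias is repeated at scale
```

A real machine-learning model is vastly more complex, but the underlying dynamic is the same: the system has no notion of fairness, only of the patterns in its training data, so any bias encoded in those data is reflected back – efficiently and at scale.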
Humans train machines, and these machines are only as good or as bad as the people who train them. Building on this insight, the publication addresses not only algorithmic bias and discrimination in AI, but also related issues such as hidden human labour, the problem of categorisation and classification – and our ideas and fantasies about AI. It also raises the question of whether (and how) it is possible to reclaim agency in this context.