- Mirck, A. (2024). AI, voorbij de hype. 25.
- Driuch, Y. (2025, September 25). ChatGPT als hulp bij mentale problemen: “Liever AI dan therapie”. NOS.
- Quekel, S. (2024, August 3). Ongeloof om Nederlandse rechter die ChatGPT gebruikt in vonnis: ‘Dit kan echt niet’. AD.
- Veenbaas, J. (2022). De essentie van Kant. ISVW uitgevers. 46.
- Ibid. 47.
- Ibid. 45.
- Russell, B. (1981). Geschiedenis der Westerse Filosofie. Uitgeverij Servire B.V. 126.
- Ibid. 636.
- Wambacq, J. (2024). 'Fenomenologie', in Deleuze Compendium. Boom. 347.
- Hughes, J. (2009). Deleuze's Difference and Repetition. Continuum International Publishing Group Ltd. 8.
- Wambacq, J. (2024). 'Fenomenologie', in Deleuze Compendium. Boom. 347.
- Consider also neural networks, one of the ways machines now operate: they always contain a "hidden layer" that obscures part of the machine's operation.
- Nieuwenhuis, J. (2025). Hoofdstukken vermogensrecht. Wolters Kluwer. 62.
- Taekema, H.; Gaakeer, A.; Loth, M. (2025). Recht in context. Boom. 31.
Inhuman Experience
Artificial Intelligence and the horizon of our knowing
AI is gaining more and more influence in our society. [1] It is clear that people are turning to AI for complicated, abstract problems, such as psychological advice. [2] One of the places where the use of AI is being tested is in court. [3] The question of whether we find that desirable makes us ask what AI can know and what we can know about AI; it thus invites us to re-examine the human instruments by which we gather knowledge, and to ask whether they could also be found within AI. The work of Immanuel Kant offers a transcendental theory of knowledge, in other words a theory in which the conditions of our knowledge are sought. This essay examines in what way these conditions for experience differ between us and artificial intelligence.
Kant's first critique proposes an epistemological theory in which we can only know the world as it is constructed by our reason through perception. We do this by organizing sensory perceptions according to the categories inherent in our human constitution. [4] This position seems somewhat paranoid because of how radically it limits our cognitive capacity: it raises the fear that something inherently incomprehensible might exist, impossible to grasp within the system of our understanding. Kant acknowledges this, writing that humans have a natural need to transcend the limited world of empiricism toward the unconditioned absolute; [5] Kant's rational person always wants to look back, to know that no one and nothing is in its blind spot.
In Kant's first critique, however, the origin of the categories, or of cognition in general, is deliberately not sought: it is a given that they are the a priori necessities of experience, [6] and therefore could never be formed by experience. Knowing the world, by its synthetic nature, becomes a mechanism of shaping rather than of pure discovery: while Kant's subject creates an experience of objects, it can only do so by recognizing in them the categories it already possesses, rather than the formed experience actually discovering something new about the object. These abstractions do not exist externally, as they do in Plato, [7] for example, but exist transcendentally as a necessary part of the understanding. [8] Kant's subject is perhaps active in the sense that it tries to do something with the objects around it to make experience possible, but it remains separate in the sense that it discovers nothing it does not already know; it only recognizes in the appearance what was already within the subject. One part of our experience that Kant says little about, but which does constitute a major distinction between us and artificial intelligence, is the way we find ourselves as bodies in the world. After Husserl, however, phenomenology posits a unified and intentional subject: [9] the knowing mind is always knowing something while it is in the world it knows; if we want to see the mind as productive, it must derive its abstractions from how the specific individuals it perceives appear to it. [10] These abstractions still arise from how the object appears to us, but in this way knowledge is seen as constituted by a relationship between subject and object rather than by an appropriation; phenomenology remains transcendental, but becomes embodied and more intentional. [11]
By viewing cognition as a process that actively and intentionally connects to the world of which it is a part, greater emphasis is placed, in accounting for the origins of consciousness, on how that consciousness is in the world and affects the world: corporeality becomes a fundamental part of what consciousness is. An artificial entity thus relates to the world around it in a fundamentally different way, and this directly influences what that consciousness is: isn't it remarkable to claim that a being with no physical presence whatsoever looks at matter in the same way as a being with a body? A counterargument could be that things like time and space can be represented in data, but numbers cannot cover everything: you cannot understand an earthquake simply by reading its cost in currency and its magnitude on the Richter scale. Because of the way an artificial entity is situated in the world, the experience of that entity will be incomprehensible to us, just as our experience will always be incomprehensible to it, as if you were trying to explain what a sunset looks like to someone born blind. The cognitive functioning of an artificial entity always lies partly beyond the horizon of our knowledge. [12]
Phenomenologically, we can now speak of AI as a different kind of mind, but not as the absence of mind. AI's being-in-the-world is different, but does that, to return to the beginning of this essay, immediately mean that it should not be allowed to administer justice? To answer that question, it is important to know that law partly refers to general notions of justice; [13] it is not entirely an algorithm for administering justice based on fixed, written rules. [14] To apply general notions of reasonableness and fairness in a case, a judge has to draw on their own experience of these concepts, and if we leave that task to AI, we cannot truly understand how it develops a representation of them. The experience that AI would have of, for example, reasonableness could easily be completely different from ours, simply because the way AI constructs an experience is fundamentally different. After all, justice does require a form of empathy for how another person experiences the world, a form of empathy that AI would be unable to possess because its experience is so different. The question of whether we trust an incomprehensible AI to administer justice therefore changes: do we trust what lies beyond the horizon of our understanding?
- 𒈗