Research in multi-agent systems typically assumes a regulative model of social practice. This model starts with agents who are already capable of acting autonomously to further their individual ends. A social practice, on this view, achieves coordination between multiple agents by restricting the set of actions available to them. For example, in a world containing cars but no driving regulations, agents are free to drive on either side of the road. To prevent collisions, we introduce driving regulations, insisting that everyone drives on the left-hand side of the road. We accept this limitation on our freedom because it lowers the probability of a collision.
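The regulative picture can be sketched in a few lines of code: the agent's action set exists prior to the norm, and the norm merely filters it. This is a minimal illustration of our own; the names (`permitted`, `drive_on_left`) are hypothetical and not drawn from any particular multi-agent framework.

```python
from enum import Enum

class Side(Enum):
    LEFT = "left"
    RIGHT = "right"

# Without regulation, every physically possible action is available.
physically_possible = {Side.LEFT, Side.RIGHT}

def permitted(actions, norm):
    """A regulative norm filters a pre-existing set of actions."""
    return {a for a in actions if norm(a)}

# The driving regulation: everyone must drive on the left.
def drive_on_left(side):
    return side is Side.LEFT

assert permitted(physically_possible, drive_on_left) == {Side.LEFT}
```

The point of the sketch is that the norm changes nothing about what actions exist; it only removes some of them from the permitted set.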
This paper describes AI systems that are based on the constitutive view of social practice. On this alternative view, certain actions are only available to an agent because it is participating in a practice of a certain sort. For example, you can swing a peculiarly shaped piece of wood without participating in any particular practice, but this action only constitutes a strike if you are playing a game of baseball. You can raise your hand whenever you like, but this only counts as voting for the motion within the institution of voting. Not all the examples come from frivolous or optional activities, however. Sellars and Brandom argue that even the fundamental act of assertion is only possible within the context of a particular social practice: the Game of Giving and Asking for Reasons.
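The constitutive view can be contrasted with the regulative one by a "counts as" rule in Searle's sense: a physical action X counts as an institutional act Y only in context C. The sketch below is our own illustration; the table entries and function name are hypothetical, not a formalism proposed in this paper.

```python
# Constitutive rules: (physical action, practice) -> institutional act.
# Outside the practice, the physical action constitutes no institutional act.
COUNTS_AS = {
    ("raise_hand", "voting_session"): "vote_for_motion",
    ("swing_bat", "baseball_game"): "strike",
}

def institutional_act(physical_action, practice):
    """The same physical action constitutes different institutional
    acts (or none at all) depending on the practice in force."""
    return COUNTS_AS.get((physical_action, practice))

assert institutional_act("raise_hand", "voting_session") == "vote_for_motion"
assert institutional_act("raise_hand", None) is None  # a mere gesture
```

Unlike the regulative sketch, the rule here does not restrict a pre-existing action set; it brings the institutional action into existence in the first place.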
The constitutive view, while familiar to philosophers, is relatively unknown within the AI community. The central aim of this paper is to show how it can be put to use in AI applications.