Caroline Howard was “interested to know exactly how Durkheim's anthropological research challenged the idea of logic as absolute. What kind of research did he do?”
Durkheim and Mauss primarily relied on ethnographic reports and anthropological texts. This was standard in early sociology; for example, Mauss used the same method to write his famous essay “The Gift” (1925). In the 1890s, they were looking at works like Spencer and Gillen’s “The Native Tribes of Central Australia” (1899), and first-hand accounts of Europeans who spoke with Aboriginal Australians. I’ll say more about this next week.
Caroline later asked another great question, which gets at the heart of the matter: “Is it your thesis that the process of collection and division is entirely socialized? Or that the role of socialisation has been under acknowledged?”
My answer is “No, collection and division is not entirely socialized, but the skill is greatly enhanced by cultural learning.”
Let’s take an example of such an enhancement: cooking. While it builds on basic animal predilections for high-calorie foods, human cooking involves intricate cultural systems for categorizing ingredients, techniques, and cuisines. We’ve gone far beyond simple distinctions like “edible” versus “inedible,” which are categories we can see cats and dogs working out, into elaborate taxonomies of taste, preparation methods, and cultural signifiers.
I think that we share with other animals a tendency to see things in “kinds.” An animal, like a human, may see another animal as “prey” or another animal as “predator,” for example. This is a conceptual and categorical way of seeing. It starts with sensory input, but quickly replaces the sensed experience with a categorical judgement, mostly unconsciously.1
But just because we “sense in kinds” does not necessarily mean that “kinds really exist.” The simplest explanation for the experience of kinds is that nature comes pre-divided into kinds, and we simply sense what’s already there. This view — that we directly perceive real categories in nature — is called “naïve realism.” Though common in everyday thinking, philosophers have found it difficult to defend, at least since Hume’s A Treatise of Human Nature (1739).
Why is this? Let’s look at a concrete example. In Southern California, where I’m from, housecats are predators when it comes to mice and birds. But they are prey to coyotes. And most people see cats as neither predator nor prey (“for us”). So what exactly are these categories of “predator” and “prey”? They only make sense from a particular perspective and context.
These categories must feel absolutely real to the housecat: the bird is obviously prey, the coyote is obviously a predator. But do these categories exist in the abstract, outside of any perspective? That is harder to defend. While predator-prey relationships might illuminate dynamics across different pairings of species, they’re not fixed categories which are “out there” and of which the species are simple instantiations. And if we think about this evolutionarily, categories like “predator” and “prey” simply must precede more abstract categories like “true” and “false” or “good” and “evil.”
This raises a deeper question: if we can only know things through experience, and our experience already comes unconsciously organized into categories, how can we ever know whether these categories are features of reality or just a function of how we perceive? Problems like this one were what bothered philosophers like Hume and Kant. It’s a bit like trying to tell whether you’re wearing rose-tinted glasses if you’re never able to take them off.
Such philosophical puzzles have practical implications, for example in how we educate children. Humans, like housecats, see in kinds. This seems to be basic to perception. But human culture massively amplifies this tendency in ways that transform our relationship with categories entirely. Sometimes the process even transforms the categories in irreversible ways that make it hard to see how we used to see. Through deep childhood training and education, we learn to see the world through increasingly abstract frameworks.
The work of Soviet psychologists Lev Vygotsky (1896–1934) and Alexander Luria (1902–1977) demonstrated at the individual level what the sociologists Durkheim and Mauss showed at the cultural level: that most of our capacity for collection and division is learned, not innate. Because logic depends on categories, this means logic is also learned and has a history, as I argued in “The Birth of Logic.”
Vygotsky and Luria identified two broad approaches to classification: a “graphic-functional” way that stays close to direct experience, and an “abstract-logical” one that operates more theoretically. Adults who haven't learned the latter as children can still acquire it with a few years of training, but they initially rely on the former.2
How does this occur in education? Consider how we learn the concept “cow.” A child first encounters specific cows — real animals in their environment. They might later meet more cows and learn that these, too, are labelled with the term “cow.”3 But contrary to what we might assume, children don’t naturally “see features” and group things accordingly. The ability to notice “common features” itself has to be trained through repeated trials, as Vygotsky’s experimental work showed. It is my view that toy and cartoon cows, which strongly exaggerate the features adults care about, massively intensify this tendency to pay attention to features. It is therefore adults who guide children in what count as relevant similarities. The shared features between cows that now seem “obvious” to us are actually the result of extensive cultural training in how to see and how to categorize.
Luria's research in 1930s Uzbekistan showed how profound this training is: many illiterate adults classified things quite differently, staying closer to direct experience rather than abstract features. Fascinatingly, they often objected to logical premises. When presented with the syllogism, “All bears north of latitude Y are white” and “Town X is north of Y,” followed by “If you see a bear in Town X, what colour is it?” they would respond something like: “I don’t know — did you see this bear? How do you know all bears above a certain latitude are white? Have you been there?”
They were extremely reluctant to abstract away from experience, especially on the word of someone they did not yet trust. Those with no Soviet education would refuse to complete the syllogism; those with two years’ training vacillated between accepting and rejecting the syllogism’s premises; and those with more years of education tended to complete the syllogism without complaint. What seems like “natural” feature recognition to us is actually a highly trained way of seeing.
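For readers who like to see the “abstract-logical” move spelled out, the syllogism Luria posed can be sketched as a mechanical rule application. This is my own illustration, not anything from Luria; the function name and structure are invented for the example. The point is that the conclusion follows from the premises alone, with no appeal to first-hand experience.

```python
# A sketch of the abstract-logical move Luria's syllogism demands:
# treat the premises as a closed system and derive the conclusion
# mechanically, without asking "have you seen this bear yourself?"

def bear_colour(town_is_north_of_y: bool) -> str:
    """Apply the premises:
    1. All bears north of latitude Y are white.
    2. Town X is north of latitude Y.
    So any bear seen in Town X falls under premise 1.
    """
    if town_is_north_of_y:
        return "white"    # follows from the premises alone
    return "unknown"      # the premises say nothing about other bears

# Town X is stipulated to be north of latitude Y:
print(bear_colour(town_is_north_of_y=True))  # → white
```

Luria’s unschooled respondents refused exactly this closure of the premises: for them, the question could only be settled by someone who had actually been there and seen the bears.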
There are advantages and disadvantages to sticking more closely to perception and to the word of those with first-hand experience. Obviously it makes it harder to do syllogisms. But arguably, syllogisms were never useful in their lives.4 And trusting abstract categorizations from strangers may make it easier to be deceived.
The painful development of our basic categorical abilities through culture reaches its height in philosophical systems like Plato's method of collection and division — where things are first grouped by similarities, then divided by their differences. This is an extremely late and highly socialized stage of categorization. Though it builds on basic animal capacities, it transforms them into something radically new.
Kant came to believe that certain “forms of intuition” (like space and time) and “categories” (like causality) must be built into our minds before we can have experience at all. But the research of Durkheim, Mauss, Vygotsky, and Luria suggests something different: these apparently “natural” ways of organizing experience are actually taught to us through culture. This is why Durkheim’s work has been called “sociologizing Kant” — he shows how even our most basic categories emerge from social life rather than being hardwired into individual minds.
Note that, according to my argument, the formation of rigid categorization, which is an aspect of the foundations of logic, was first social (Socrates’ conversational method), then individual (Plato’s collection and division), then came to seem a basic theoretical principle, perhaps even innate (in parts of Aristotle).
Interestingly, the problems in the foundations of logic were, in some sense, also first social (Durkheim and Mauss), then individual (Luria and Vygotsky), and finally at the basic theoretical level (arguably, in the line from Frege and Russell, to Gödel and Turing, to Wittgenstein, Quine, and Kuhn).
As always, I’d love to receive your thoughts and questions in the comments.
Bryan
In The Gay Science §111 (1882), Nietzsche made a similar Darwinian argument: seeing in categories must have had survival value, since those who could quickly categorize an animal as a “predator” (rather than perceiving it as a unique individual) would be more likely to survive. And I think Hermann von Helmholtz (1821–1894) provides evidence that most of this type of inference is unconscious.
I suspect there are many, many more ways than the two that Vygotsky and Luria identify. However, their basic distinction between “reasoning which sticks close to perception” and “reasoning which veers far from perception” does seem to capture something important. Schopenhauer observed a similar division between intuitive and abstract knowledge, and anthropologist Lucien Lévy-Bruhl (1857–1939) identified a related pattern, though his case requires more careful treatment than we have space for here.
I’m trying not to fall into the trap that Wittgenstein observes St. Augustine falling into at the start of the Philosophical Investigations. The trap is the idea that we learn words by associating sounds with objective categories. Wittgenstein shows at a theoretical level what Vygotsky showed at a practical level: that it can’t actually work this way. But for the purposes of this point about abstraction, we can leave aside how exactly words are learned.
Schopenhauer has something to say about this. If you’re interested, remind me in the comments and I’ll find it.
"But do these categories exist in the abstract, outside of any perspective? That is harder to defend." - what would be the benefit of attempting to defend this? My intuition would be that it's something we will never be able to prove, almost like the existence of god. On this my guess is we have limited info/perspective, are in Plato's cave and always will be.
I have many other questions but will save them for irl bc otherwise it will become too tiresome to answer here. But thank you, this was a treat for me!
Thank you for responding to my questions, Bryan. Interesting to read about the Uzbeks in Luria's research being not only unwilling to complete the syllogism but also sceptical about statements presented as fact. You also make a good point about children's toys and cartoons having grossly exaggerated features.