Interactivist Refutation of Radical Epistemic Skepticism Characteristic of Postmodern Literary Studies and Fiction

– Postmodern fiction and literary theory have endorsed radical skepticism about knowledge, a stance partly conditioned by philosophers' inability to provide a viable epistemology that would resolve the radical skeptical problems. The most recent developments in both fiction and theory have largely ignored this apparently still unresolved issue and have instead embraced a position of active commitment which presupposes that knowledge is in fact possible. In this text, we address this apparent oversight and present an interactivist ontology of the mind and culture that explains how we can acquire knowledge about the world, and why postmodern radical skepticism is ungrounded. We argue that once the interactivist theory of cognition, as well as its ontology of the mind and culture, is assumed, the skeptical problems that troubled postmodern thought become non-problems, and the further pursuit of an epistemically evaluable theory of culture (fiction included) gains solid theoretical grounds.

offered by the novel, the reader is challenged to translate these acts of pure seeing [the acts of the two protagonists] into acts of responsible reading and to reconstruct an 'unbroken' depth, where the fragments shored against postmodernism's ruins might once again reconnect" (163). Apparently, setting the possibility of faithful representation aside, the authors (Huber and Funk) believe that by means of artistic form, and with the reader's cooperation, "a depth beyond the surface of representation" can be accessed in aesthetic experience (154). Though they see their "reconstruction" approach as complementary to metamodernism, the two have much in common. In short, metamodernism tries to find a way out of the postmodern detachment, skepticism and nihilism related to cognitive skepticism by suggesting that representation of reality (impossible for cognitive skeptics) is not a sine qua non condition of the reader gaining insight into values and meanings, of art being "as if" committed.
However, even if (linguistic) skepticism is for the time being gone from mainstream literary studies and fiction, it has merely been side-stepped, not successfully refuted. It remains unexplained why it makes sense to investigate how texts of culture represent current social problems. Scholars working within ideologically committed frameworks seem to presuppose that it is indeed possible to engage with social issues by studying fiction (or narratives in general), effectively ignoring the still unresolved radical skepticism. 2 The same applies to other humanist approaches (such as postclassical narratology, metamodernism or cognitive literary studies) which (contrary to postmodernism) assume that it makes sense to engage in scholarly projects that aim to explore culture (i.e. to search for the truth about reality).
In this text we want to show why the above approaches are not misguided in their presupposition (that research projects are reasonable); that is, we wish to show why and how radical (linguistic) skepticism is wrong and how it can be overcome. Our argument will make use of the interactivist theory of cognition, and in particular of the anti-skeptical argument that interactivism offers (Bickhard, "The Interactivist Model"; Bickhard and Terveen; Mirski and Bickhard). The fundamental difference between our argument and other attempts to address postmodern skepticism that we find in the literature 3 is that we identify the problem in the foundational assumptions about how the mind works, which are held widely across academic disciplines and are not unique to literary studies. The proposed solution is likewise a fundamental change at the very foundations of how we conceptualize the mind and cognition. This, we believe, is a novel approach, as other critiques of postmodern skepticism usually stay within the ambit of the humanities. Incidentally, this is also why a significant part of the paper, though its primary purpose is to defend epistemic realism in literary studies, is devoted to the presentation of the interactivist model. It is important that the reader understand the model - our primary tool - since we argue that postmodern skepticism has its roots in the foundational ontology of the mind.
2 This is not to imply that postmodern theory was not socially and politically engaged. Within late 20th-c. literary studies, the approaches engaged politically in the defence of various kinds of discriminated minorities have, paradoxical as this might appear, often been informed by the postmodern view of life. There are thus, for example, non-skeptical feminist critics (such as Elaine Showalter or Susan Gubar) who see literature as "a series of representations of women's lives and experience which can be measured and evaluated against reality" (Barry 124) and skeptical feminist critics (such as Julia Kristeva or Luce Irigaray), inspired by Foucault or Derrida, for whom "the literary text is never primarily a representation of reality, or a reproduction of a personal voice expressing the minutiae of personal experience" (Barry 125). Barry notes a similar kind of division within postcolonial criticism (195ff). In both cases, he suggests, the less theoretical approach is more directly related to political action (196), but strictly speaking both are politically committed. 3 One can mention here first of all Intellectual Impostures: Postmodern Philosophers' Abuse of Science by Alan Sokal and Jean Bricmont (1997) and Christopher Norris's Against Relativism: Philosophy of Science, Deconstruction, and Critical Theory (1997), but also Ernest Gellner's Postmodernism, Reason and Religion (1992) or the critique repeatedly voiced in various publications by Richard Dawkins and Karl Popper.
Let us briefly discuss the skeptical ideas present in the writings of many theoreticians of postmodernism and in much of postmodern fiction, to clarify the position against which we are arguing. Notably, each nonfiction author we discuss is a first-rate scholar whose theories have contributed to our understanding of postmodern culture, and each of their nonfiction books functions as a widely used handbook in academic education. Their epistemic skepticism does not diminish the quality of their research. Likewise, the postmodern authors discussed by Patricia Waugh, Shlomith Rimmon-Kenan and Linda Hutcheon, like Nabokov, Carter or Banville, are first-rate novelists, whose books are often among the finest artistic achievements of the late 20th c. Our aim is not to find fault with these authors but to exemplify epistemic skepticism with their work. Nota bene, some of these authors explicitly declare this position, while others simply assume it, for example when discussing the status of their scholarly work or the criteria by which they evaluate competing hypotheses. All of them, inconsistently (as consistent skeptics should suspend their judgement), offer highly valuable interpretations of contemporary fiction.

Radical Epistemic Skepticism: Representation and Truth in Postmodern Theory
Let us begin with the theorists. Brian McHale in his book Postmodernist Fiction explains that the dominant of this convention is ontological, i.e. postmodern books, through techniques such as erasure, the literalization of metaphors or the foregrounding of style, inquire into the relation between fiction and reality. This theory seems plausible - it helps explain the meaning of many postmodern works. Surprisingly, in the first pages of his book, McHale claims that "the referent of 'postmodernism,' the thing to which the term claims to refer, does not exist" (McHale 4); as he explains:

There is no postmodernism "out there" in the world any more than there ever was a Renaissance or a romanticism "out there." These are all literary-historical fictions, discursive artifacts constructed either by contemporary readers and writers or retrospectively by literary historians. And since they are discursive constructs rather than real-world objects, it is possible to construct them in a variety of ways. (4)

The same applies to theoretical explanations of this cultural phenomenon: "Similarly we can discriminate among constructions of postmodernism, none of them any less 'true' or less fictional than the others, since all of them are finally fictions" (4). They may not be equally valuable, as McHale admits, but the criteria that can help compare them (self-consistency, scope, productiveness and, above all, interest, 4 4ff) do not include any strong criterion of strictly epistemic value (plausibility, adequacy, truth, explanatory power etc.). Firstly, McHale's position seems inconsistent: if there is no postmodernism "out there," creating its theories within a research project is nonsensical. 5 Secondly, the position is incompatible with the rules of any cognitive enterprise, since the theories a scholar produces are, according to McHale, at best interesting fictions, i.e. they are not meant to yield any kind of knowledge about reality (they cannot be evaluated in terms of their consistency with empirical data).
Patrick O'Neill is the author of a postclassical handbook of narratology titled Fictions of Discourse, in which he introduces, among other things, the idea that the postmodern implied author may be unreliable, or that focalization always involves both the narrator and the implied author (and in some texts also a character). Many analyses he proposes (e.g. of compound focalization) constitute, in our opinion, an important improvement on previous theories. Yet again, at the beginning of his book, O'Neill skeptically claims that "narrative theory, the narrative of narrative, is . . . a form of game, played voluntarily or involuntarily, for whatever professional or private reasons, by narrative theorists" (27). A "game," as he explains, referring to Bernard Suits, is play with "at least one constitutive rule." This rule (or rules), which forbid(s) the player to use "more efficient in favor of less efficient means," thereby give(s) rise to "unnecessary obstacles" which players try to overcome (28). Already this seems strange as a description of scholarly activity, but the final straw is that the aim of the activity is not truth: "Games, as specifically focused forms of play, do not set out to discover truth" (28). This of course applies also to narrative theory, but it diminishes neither its importance nor the possibility of practical applications (28). As O'Neill explains,

explanations are always potentially more about themselves and the consistency of their own workings than about the subject-matter they ostensibly aim to clarify. . . . Narratology is ultimately about narratology, just as all theory is ultimately self-reflexive. This does not mean, of course, that narratology or any other form of theory cannot ever be used towards non-selfreflexive ends − theoretical physics, after all, resulted in the atomic bomb and men on the moon among other entirely practical results. (31)

There is, namely, a difference between pure theory (play) and applied theory, which "as work, is primarily about its object of investigation" (32). In other words, theories are games, concerned with themselves, not truth. Strangely, they can have practical applications which work (cf. the atom bomb). O'Neill seems to ignore the fact that for the bomb to work (i.e. explode), its construction (and the theory on which it is based) needs to relate to reality. The bomb works because physicists are not merely "overcoming unnecessary obstacles" when doing their research. Why should O'Neill deny this and undermine the epistemic value of his own theory? Presumably, like McHale, he does not believe that a (literary) theory might connect with empirical data (reality "out there" or its "object of investigation").
In the 2nd edition of Narrative Fiction (a standard academic handbook of narratology), Shlomith Rimmon-Kenan added chapter 11, "Towards… Afterthoughts, almost twenty years later," in which she critically examines the structuralist project of narratology, part of which is her own book. She admits that narratologists once hoped to build theories that would be "objective," "neutral" and "scientific" (136). Without questioning the epistemic value of narrative theory, she now distances herself from the ideal of scientificity. As she puts it, "it is difficult today to attribute objectivity, neutrality, scientificity to narratology (or to 'the sciences' themselves)" (145ff). Discussing radical poststructuralist critiques of classical narratology, which question ideals of order, objectivity, reliable language and descriptions that are not laden with interpretations and ideologies (138ff), 6 she does not try to confront them. But neither does she accept them, defining narratology as theory, i.e. "a self-conscious reflection, a conceptual framework, a set of hypotheses having explanatory power" (146), and seeing this theory as "valuable both in itself and as something that enables a set of analytic procedures which is still generally said to 'work'" (Narrative Fiction 150). Her approach to theory appears to be typically scientific, though the final phrase - "is still generally said to work" − might be taken as camouflaged self-doubt. To recap, here again a very sound and scientific analysis of the structure of narrative fiction is accompanied by apparent uncertainty as regards the very possibility of scientific investigation.
Another work of Rimmon-Kenan's, A Glance Beyond Doubt: Narration, Representation, Subjectivity, leads us directly to the possible origin of the trouble. As the author explains, under the influence of Martin Heidegger, Ludwig Wittgenstein, Jacques Derrida, and Jacques Lacan,

[g]rave doubts have been cast on the capacity of language to reach − let alone represent − the world. The presumption of the existence of a reality prior to the act of representation has also come under fire. . . . Instead of a thing-in-itself, reality is now considered an absence, and language replaces, rather than reflecting or even conveying, this absent reality. (8)

Representation, having once replaced mimesis, 7 is now itself being replaced by creation, play, textuality, intertextuality, and metatextuality. Play seems particularly attractive as an alternative to representation: Rimmon-Kenan cites Iser, who sees as the "heuristic advantages" of play the fact that it "does not have to concern itself with what it might stand for" and "does not have to picture anything outside itself"; apparently play "imposes a shape on what is absent" (11). Other authors Rimmon-Kenan cites claim that literature neither represents nor creates reality, but simply "produce[s] . . . pure textuality" (11). Another option is intertextuality: "a reference from words to words, or rather from texts to texts"; as Rimmon-Kenan explains, "the concept of 'text' is often expanded to designate the whole world. The world, as a network of signs, becomes a text (or series of texts); intertextuality replaces representation" (12). Finally, metatexts are self-referential texts about the difficulty or impossibility of language or literature reaching the world (12).
Rimmon-Kenan reports also on Althusser's and Foucault's take on representation, which they see as "related not to reality but to discursive practices," which in turn are ideological constructs, so that in effect what is "re-presented" (here: repeatedly presented) is ideology, not reality (A Glance Beyond Doubt 16).
To sum up, language and literature seem incapable of representing reality; on the contrary, their presence means reality is absent, while what is represented or referenced is other discourses, ideologies, texts and the like. This does not satisfy the author of A Glance Beyond Doubt: she confesses that she "feel[s] uncomfortable with the complete divorce between representation and reality, between subjectivity and selves" (17). In her book, she opts therefore for the access that can be gained through narration at least to the subjectivity engaged in a narrative act, as if this were all that can be saved. Here is how Rimmon-Kenan explains this idea: "the act of narration does not represent the world directly. Rather, it represents modes of representation, possibilities of doubt and credence, in the worlds the characters inhabit"; as a result, "the interaction [between discursive practices] issues in a gesture of substitution, offering indirect access to a 'world'" (A Glance Beyond Doubt 20). Rimmon-Kenan thus offers an attempt to affirm the (highly indirect) access we have to the world, as if uncertain how otherwise to defend the idea that language connects us with reality, that representation is possible and can be evaluated in epistemic terms.
6 (1977, emphasized the position of mastery as a position of blindness, the determination to obtain knowledge as a kind of murder, and 'literature' as precisely that which escapes full knowledge and mastery . . ." (Narrative Fiction 138). 7 "An awareness of the nontransparency of language and of its problematic relation to the world has often led to the replacement of mimesis by representation . . . . While denying language the capacity to imitate a nonlinguistic reality, many traditional views of representation still conceive of language and literature as articulations, reproductions of a prior presence" (A Glance Beyond Doubt 7).

Radical Epistemic Skepticism: Representation and Truth in Postmodern Fiction
Representation and truth are questioned not only by postmodern literary theorists but also by postmodern novelists. In Nabokov's The Real Life of Sebastian Knight, reality cannot be distinguished from texts, nor the real self of the protagonist from the narrator telling his story. In The Infernal Desire Machines by Carter, reality is apparently constituted by the desires of the powerful. In Banville's Kepler, the scientist realizes that his hypotheses fit the world because they (his mind and the world) were created by God in a way that facilitates the fit (the implied author presumably finds this explanation invalid while remaining skeptical about the scientist's ability to reach reality - the novel is clearly metafictional). In Barnes's Flaubert's Parrot, Geoffrey Braithwaite's painful experience of grief after his wife's suicide turns out to be by and large (like himself) culturally constructed.
We move on now to a brief discussion of postmodern fiction, based on the studies of Waugh and Hutcheon. Introducing the concept of metafiction, which in her opinion is the feature of the genre of the novel most manifest in, and the key feature of, postmodern fiction, Waugh focuses on the new understanding of language and its relation to reality. She notes that language is viewed as "an independent, self-contained system which generates its own 'meanings.'" It follows that "[i]ts relationship to the phenomenal world is highly complex, problematic and regulated by convention. 'Meta' terms, therefore, are required in order to explore the relationship between this arbitrary linguistic system and the world to which it apparently refers." Language is no longer thought to "passively reflect[] a coherent, meaningful and 'objective' world"; instead, it participates in constructing (what we perceive as) our "everyday 'reality'" (Metafiction 3). The epistemic position of the metafictionist can, Waugh argues, be explained with reference to Heisenberg's idea that the observer impacts the situation she is observing:

However, while Heisenberg believed one could at least describe, if not a picture of nature, then a picture of one's relation to nature, metafiction shows the uncertainty even of this process. How is it possible to "describe" anything? The metafictionist is highly conscious of a basic dilemma: if he or she sets out to "represent" the world, he or she realizes fairly soon that the world, as such, cannot be "represented." In literary fiction it is, in fact, possible only to "represent" the discourses of that world. Yet, if one attempts to analyse a set of linguistic relationships using those same relationships as the instruments of analysis, language soon becomes a "prisonhouse" from which the possibility of escape is remote. (3ff)

Roughly, the postmodern writer using the verbal medium can only, according to Waugh, represent verbal reality, in which her own discourse participates. As a matter of fact, as Waugh explains, metafictional writers vary in their estimate of the human ability to access the non-verbal world. While some (B. S. Johnson or John Fowles) do allow it, the more radical ones, like Christine Brooke-Rose, Ann Quin or Brigid Brophy, at least in some of their books simply deny it: "[t]o be aware of the sign is . . . to be aware of the absence of that to which it apparently refers and the presence only of relationships with other signs within the text. The novel becomes primarily a world of words, self-consciously a replacement for . . . the everyday world" (57).
In A Poetics of Postmodernism, Hutcheon argues that postmodern fiction tries to combine formal self-awareness with social and political engagement, hence its self-contradictory character. This partly explains the postmodern attitude to representation. Postmodern authors, being interested in political and social ideas (rather than cultivating art for art's sake), need to be able to connect with reality, but, well aware of the mediation of form (language, ideology, artistic convention), find representation highly problematic. In particular, postmodern writers are interested in the past, or more precisely in how the past is narrated - historiographic metafiction is, according to Hutcheon, the most important genre of the postmodern novel. This "historiographic metafiction - like postmodern painting, sculpture, and photography - inscribes and only then subverts its mimetic engagement with the world. It does not reject it . . . ; nor does it merely accept it . . . . But it does change irrevocably any simple notions of realism or reference by directly confronting the discourse of art with the discourse of history" (20). In chapter 9, devoted to the problem of reference, Hutcheon explains that language and reality were disconnected by the formalism of modernist art (141ff), which is consistent with her interpretation of modernism (but not fully convincing: modernism can also be seen as mimetically representing human experience, thus complementing the realist project of representing social reality). The other origin of the problem is philosophy: the works of Ferdinand de Saussure, Derrida, Wittgenstein and the like:

Historiographic metafiction explicitly and even didactically asks the same central questions about the nature of reference that are being asked in many other fields today. Does the linguistic sign refer to an actual object - in literature, history, ordinary language? If it does, what sort of access does this allow us to that actuality? . . . Can any linguistic reference be unmediated and direct?
Hutcheon discusses at length philosophical approaches to language, meaning, reference and sense in works of fiction. But these approaches do not help re-establish the desirable connection. One way of achieving the connection is by reducing reality to discourse, and though some novelists choose this strategy, Hutcheon does not seem satisfied. She enumerates five kinds of reference available to the language of postmodern fiction: "intra-textual reference, self-reference, inter-textual reference, textualized extra-textual reference," and "hermeneutic" reference. Intra-textual reference covers the reference of the language of the work of fiction to fictional reality; self-reference is language referring to itself; inter-textual reference covers reference to other texts; textualized extra-textual reference means reference to extra-textual documents - textual traces of the non-textual past; while hermeneutic reference involves the reader, through whom, in the process of reading, words connect with reality (154ff). Reference to extra-textual, empirical reality, however, is not available. Though it is not denied, it cannot be asserted. "The real exists (and existed), but our understanding of it is always conditioned by discourses, by our different ways of talking about it" (A Poetics of Postmodernism 157). In short, while insisting that postmodernism wants to connect language with reality, Hutcheon cannot imagine how such a connection could be made.
To sum up, in the works of many literary scholars one can find skeptical attitudes, exemplified here by McHale, O'Neill, Rimmon-Kenan and, to a lesser extent, Waugh and Hutcheon. These authors claim that all theories are fictions and cannot be compared with each other in terms of their epistemic value (McHale); that theories can otherwise be classified as "games," which among other things implies the absence of claims to truth (O'Neill); that language cannot give us access to reality, so that verbal representation of reality does not seem possible (Hutcheon, Rimmon-Kenan, Waugh); and that the ideal of scientificity is misconceived (Rimmon-Kenan). What comes to the fore (and in our opinion might be the ground for these and similar skeptical beliefs) is the conviction that, being limited to human-made language as an epistemic tool when examining a reality that is nonverbal, and being unable to compare our representations of reality with reality itself (pure empirical data), we should not naively presume that we can know anything apart from our own creative (in effect, rather than cognitive) efforts. The philosophers whose names the above authors mention include Paul de Man, Wittgenstein, Derrida, Heidegger, Saussure, and Lacan.
As regards the postmodern novelists, they often choose to create anti-mimetic fiction, no longer desiring to conjure up the illusion that the world of fiction is a real one. On the contrary, they draw the reader's attention to the artificiality of their creations, made of language, narrative conventions, other texts and ideas. This, however, does not mean that any connection with extra-textual, nonverbal reality is in principle broken. It may be broken in some texts, in which representation is replaced by self-reference 8 (alternatively, one might side with Piotr Gutowski's idea that what is represented is the artefact or the process of its creation), but, as argued by Hutcheon, postmodernism in general is not self-referential but intent on showing that mimetic representation is problematic, by both using and undermining it (52ff). McHale, who shares this opinion, 9 adds that the "mimetic" element is to be found in the form, not theme, of the postmodern novel (75, 38ff).

General Background
The present attempt to refute postmodern radical skepticism should be seen against the backdrop of the history of philosophical skepticism and its attempted refutations. This history goes back to antiquity. Though contemporary radical skepticism differs from its ancient antecedent and the intermediate stages, all these positions are in principle irrefutable and at the same time pragmatically inconsistent, as argued by Renata Ziemińska. Ziemińska distinguishes three related positions - (radical) skepticism, fallibilism and agnosticism - which she defines as the belief that there is no knowledge, the belief that there is no certain knowledge, and the belief that reality cannot be known, respectively (24). Moderate skepticism is identifiable with fallibilism, the most common standpoint among contemporary philosophers (264). Needless to say, fallibilism does not undermine the rationality of human cognitive enterprises.
Surveying the history of European philosophy, Ziemińska notes how diverse skeptics formulated and justified their ideas and how they tried to prevent skepticism from being self-defeating. The ancient skeptics, the most important of whom is Sextus Empiricus, argued that because human senses are fallible, because the criterion of truth is unavailable, because proofs are invalid (as they either presume their own conclusions or regress infinitely), and because definitions fail to extend our knowledge, certain knowledge is beyond human reach. This naturally means that the skeptical theses, and the arguments that support them, likewise lose their claim to knowledge (27ff). Medieval skeptics were all moderate: perceiving the human mind as fallible, they trusted that the God-made world was knowable. Modern skeptics revived the ancient tradition and developed its argumentation. Most essential are the arguments associated with René Descartes (formulated also by his predecessors): the dream doubt (cf. Heraclitus or Michel de Montaigne) and the evil-demon doubt (cf. Cicero or Ockham). Though Descartes introduced these arguments in order to establish the certain foundations of knowledge, they contributed to the history of skepticism as arguments undermining our knowledge of the world, including its existence, since arguably Descartes failed to disprove them (157ff). David Hume, a moderate skeptic, also contributed to skepticism, among other things, the ideas that there is no valid inference from "is" to "should," no certain knowledge of the self's existence, and no proof of causality (177ff). The contemporary argument formulated by Peter Unger - the brain-in-a-vat argument - is a variation on Descartes' demon argument (228ff). However, the most relevant to our subject-matter is the contemporary skepticism about meanings formulated (but not embraced) by Saul Kripke on the basis of his reading of Wittgenstein. The Kripke-Wittgenstein meaning skepticism holds that we cannot be certain of the meaning of our beliefs - their meaning is undefined. Kripke's quus thought experiment, presented in Wittgenstein on Rules and Private Language, aims to prove that it is impossible to establish the meaning of any word, even a word as obvious as "plus," with reference to any external facts. (In the experiment, "quus" is defined as the function that for numbers smaller than 57 is identical with addition, and otherwise equals 5. Kripke argues, among other things, that there is no fact that might decide that, when operating on numbers smaller than 57 and using "plus," we mean the plus rather than the quus function.) In the end, Kripke formulates the meaning skepticism as the claim that "There can be no such thing as meaning anything by any word" (Ziemińska 254). Ziemińska believes that the meaning skepticism can be overcome by recognizing the social nature of language - meanings are social constructs, defined through their use in social life (252ff).
8 This rejection of representation might involve "the rejection of meaning itself along with the belief that it is worth trying to understand the world (or that there is a world to understand)" (Hawthorn 66). 9 ". . . reports of the disappearance of representation in twentieth-century literature have been greatly exaggerated - as have reports of the disappearance of fantastic writing, for that matter. Much postmodernist fiction continues to cast a 'shadow,' to use Roland Barthes's expression: it continues to have 'a bit of ideology, a bit of representation, a bit of subject.' Indeed, it is precisely by preserving a bit of representation that postmodernist fiction can mount its challenge to representation" (McHale 75) and "what postmodernist fiction imitates, the object of its mimesis, is the pluralistic and anarchistic ontological landscape of advanced industrial cultures - and not only in the United States. . . . So postmodernist fiction does hold the mirror up to reality" (McHale 39).

Enthymema XXXII 2023 / 10
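For readers unfamiliar with the thought experiment, the two-case definition just paraphrased can be stated compactly (Kripke writes the quus function with the symbol "⊕"):

```latex
% Kripke's "quus" function from Wittgenstein on Rules and Private Language:
\[
  x \oplus y =
  \begin{cases}
    x + y & \text{if } x, y < 57,\\
    5     & \text{otherwise.}
  \end{cases}
\]
```

Since every computation any of us has actually performed involved arguments below some finite bound, all of our past uses of "plus" are equally compatible with quus; the skeptical point is that no fact about prior usage or mental content selects the plus over the quus function.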
Most importantly, with reference to each and every form of (radical) skepticism, Ziemińska argues that attempts to refute it (cf. the work of St Augustine, Descartes, G. E. Moore, Michael Williams or Hilary Putnam) have not been successful. On the other hand, they have not been entirely in vain, as they disclose the inconsistencies of skepticism, its hidden assumptions and at times its errors. At the same time, Ziemińska argues that regardless of the strategy adopted by the skeptic to prevent his standpoint from being self-defeating (all the skeptics she discusses in her book are male) - e.g. suspension of all claims, stating them without assertion, or treating them as necessary steps in a dialectic process that eventually rejects them - the position remains pragmatically inconsistent (thus self-defeating), in that among the tacit assumptions inherent in an act of asserting the skeptical position and/or its justification is the belief that the skeptical statements should be taken as certain. Nota bene, citing Luca Castagnoli, Ziemińska points out that self-refutation is not falsification: skepticism has not been falsified, and the skeptical theses may be true (269).
Ziemińska does not list the typically postmodern sources of skepticism. These, in our opinion, include the following epistemic ideas: (1) the mind imposes its categories (frames, ideology) on reality, whatever this reality might be, to the extent that "our reality" (human subjectivity included) is textual, composed of discourse (Kant, Derrida, Foucault); (2) language is self-referential (Saussure) and burdened with faulty metaphysics (Nietzsche, Derrida), and thus cannot serve to represent reality; (3) apparently objective knowledge is often relative (subordinate) to social power structures (Foucault); (4) metanarratives (science included) no longer legitimize explanations of reality (Lyotard). Some metaphysical intuitions, such as the idea that there are no structures but merely a free play of meanings (Derrida) or that truth is something that happens (Heidegger), might also support postmodern skepticism.
As regards the ontology of the mind that we endorse in this paper – interactivism (e.g. Bickhard, "The Interactivist Model") – it adopts the fallibilist position whereby knowledge about reality is possible but uncertain (see, e.g., Bickhard, "Levels of Representationality" 195). Interactivism is not in a position to disprove all the skeptical ideas that underpin postmodernism, but by explaining the mechanism of human cognition (representations of reality included) and language's relation to reality, it can successfully (as we will try to show) refute the key skeptical idea that knowledge of the world is impossible. Bickhard explains that it is the mistaken representation-based epistemology (which he terms "encodingism") that is the issue: "The inability to check the truth values of one's own representations – the impossibility of system-detectable error – is the core of the radical skeptical argument (Rescher 1980; Sanches 1988 [1581])," and that within pre-interactivist views of cognition this inability seemed final, leading some authors to the idealistic claim that "all we really have are the representational elements themselves, whatever they may be, and there is no support for positing a 'represented'" (Bickhard, in preparation). Bickhard, however, proposes a shift to a different, interactivist ontology of the mind, which, he demonstrates, avoids the skeptical problems. Below we go through his argumentation in a little more detail (though its full extent is beyond the space limitations of this paper).

The Fundamental Flaw of the Classic (Encodingist) Model of Representation
Certain organisms (most notably animals), the natural scientist would say, possess the ability to acquire information from the environment, retain it, and use it for the purpose of adaptive behavior. This corresponds to the common-sense claim that living things have some kind of knowledge. If we accept that claim, then we thereby assume that humans as well as other animals are in fact capable of accessing reality epistemically, contrary to the radical skeptic's point. There must, then, be a mistake either in the scientific claim that cognition is possible, or in the radically skeptical argument that it is not. The best explanation of the ability of living organisms to survive is with reference to their ability to collect and apply information 10 about their environment. In principle, if the information is correct, 11 interaction based on this information will likely be effective. It would be irrational to skeptically reject this hypothesis in favor of others which either postulate supernatural beings 12 or implausible coincidences, or else limit the possibility of human cognition to human-made worlds – in order to survive and make these worlds, humans must have successfully interacted with non-human-made reality in the first place. What we will claim below agrees with some presuppositions of postmodernism but not with its conclusions: skepticism is wrong as an across-the-board denial of the possibility of cognition, but is correct when it refutes the traditional (flawed) model of representation.
Before we delve further into the issue, let us briefly clarify how it pertains to language. The skeptical ideas prevalent in the literary-theory literature reviewed above tend to focus on language and derive from philosophical works that concern it (Wittgenstein, Derrida, Lacan, Saussure, Peirce etc.). This language-centered approach is perhaps not surprising given the fact that fiction (i.e. novels) is mostly constituted by language, as are theories of it. However, when addressing the problem of representation and truth, one cannot stop at the consideration of language; one should rather see it as a special case of a more general phenomenon of representation and communication. Language indeed seems to be the most paradigmatic case of representing, one with which we humans have perhaps the most intimate and immediate relationship. However, language depends functionally on more basic cognitive processes, such as sensori-motor representation: 13 it developed in phylogeny when such more basic natural representing was already in place, and it develops in ontogeny only after basic sensori-motor representing is sufficiently well established (though both clearly continue to develop in tandem afterwards). Accordingly, we believe that an adequate account of representing, one that can attempt to deal with the problem of skepticism, should be carried out at the general, cognitive level of analysis. Once that is addressed, we can discuss how language fits within that general model (cf. the discussion below).
Enthymema XXXII 2023 / 12
It is not, however, the case that moving to pre-linguistic cognition in one's analysis automatically solves the skeptical problems. Far from it. Be it linguistic or pre-linguistic, the problems remain: How can mental representation – however it is construed – refer to the outside world, how can it have mental content (i.e. intension or Fregean sense), 14 and how can its accuracy be known to the agent? Mainstream theoretical psychology and philosophy of cognitive sciences continue their discussion of the problem (see, e.g., Hutto "Radically Enactive Cognition" and "REC: Revolution Effected by Clarification"; Miłkowski; Ramsey; Rowlands; Shea). 15 While there are many angles to these discussions, we believe that Bickhard's critical work best defines the central problem here (Bickhard "The Interactivist Model"; Bickhard & Terveen). The following discussion (here and in the next section) summarizes his framing of the issue.
One finds the idea of a signet ring impressing its shape in wax already in Plato's and Aristotle's writings (Bickhard, in preparation); ever since, philosophers and now cognitive scientists have attempted almost exclusively to model representing as a form of encoding. Encodings have been formulated in a number of ways: a structural similarity between the representational vehicle and the represented reality, correlation or correspondence between them, or more recently as informational relationships between them (akin to the way that electrical charges in transistors informationally relate to the inputs and outputs of a computer). However, these attempts have always boiled down to positing an encoding – a stand-in relationship between two processes or things (e.g. a garden and its "copy" in the viewer's mind) – as the fundamental basis of natural cognition. 16 As Bickhard argues, this cannot work, and why it cannot work was pointed out already by the earliest skeptical criticisms. For a stand-in relationship to be of any cognitive use for an agent, the agent must already know both ends of the relationship and that they are connected with each other. If encoding were the only kind of representation and the basic cognitive mechanism, it would be unintelligible to the cognizer. Even in the most "natural" kind of representation distinguished by semiotics, i.e. indexical representation, we can take smoke to represent fire if and only if we already know that smoke correlates with fire.
13 The term "sensori-motor representation" has a specific and generally agreed-on meaning in cognitive science, but the present point is simply that processes responsible for action-perception coordination are more fundamental than language: for one, there are non-linguistic organisms that have such basic cognition, and second, human higher cognition (including language) seems to build on such more basic cognitive processes. This is clear, for instance, in that children first coordinate their physical activity and only later learn to use language (see, e.g., Rączaszek-Leonardi et al.).
14 Frege was the first philosopher to distinguish between the reference of an expression – the thing it refers to – and its sense or meaning – how it refers to it, with what content. An illustrative example is the morning star and the evening star, which, despite having clearly different meanings, were discovered to have the same reference – the planet Venus.
15 Though, interestingly, it is not an uncommon position to claim that representing is a pseudo-problem and that we should stop trying to solve it (Brooks; Chemero; Hutto and Myin, Radicalizing Enactivism and Evolving Enactivism; Van Gelder).
16 "Natural cognition" is meant here as referring to cognitive phenomena that have emerged without explicit theoretical design (such as the cognition of humans and other animals), which is contrasted with artificial cognitive systems that humans create, such as neural networks (though whether such artificial systems should be termed "cognitive" in the first place can be debated).
Although based on natural contingencies, such indexical relationships are not inherently intelligible – they need to be learnt in individual experience and/or through phylogenetic selection. And the same applies to iconic and symbolic representations: similarity must first be cognized for the epistemic link between an icon and the represented to be established, and in the case of symbols, the conventional standing-in of the symbol for a thing must likewise already be known. Consequently, it seems in principle impossible, first, to learn new representations (more precisely, if "learnt," they would remain meaningless, cf. Searle's Chinese room argument in "Minds, Brains, and Programs"), and, second, to check whether one is correct in one's representation of reality, which renders any error-correction activity impossible.
The former problem – the developmental or diachronic one – consists in the fact that the organism would have to be able to step outside of itself to see what its representation stands for (i.e. what it correlates with). Any sensory "imprint" of an object or its "copy" in the agent's mind carries no information as to what it is an imprint of, and so taking it as a representation of anything (e.g. a mug) requires already knowing what this thing is – induction from sense data is an untenable way of acquiring new representations. If encodings are the only kind of representation available to the organism and the basis for all cognition, then basing the meaning of one encoding on another would be the only option. But for this solution to work, some encodings would need to be inborn, otherwise it leads to an infinite regress. 17 Synchronically (the latter problem), encodingism makes it impossible to check the accuracy of one's representation: in order to establish whether my representation is correct when it is generated in my mind, I would likewise have to be able to step outside of myself and check. Again, if encodings are all the epistemic access to reality I can possibly have, then there is no way in which I could establish the epistemic value (truth) of my knowledge, for I cannot step outside of myself.
While there are more problems with encodingism (e.g. the frame problems: Ford and Pylyshyn, The Robot's Dilemma Revisited; Bickhard and Terveen 213ff), the above two – the impossibility of new knowledge and the impossibility of judging the epistemic value of one's knowledge – capture the essence of the skeptical argument and suffice for our purposes in this article. The main conclusion we wish to draw here is that the postmodern variety of the skeptical argument is based on and justified within the encodingist conception of representation, which, however, is an untenable position. Although encodings exist (e.g. Morse code), they cannot be the basis of natural cognition, but must rather be derivative from some more basic kind of knowing. While we agree with skepticism inasmuch as encodingism is concerned, we disagree that the failure of encodingism renders any talk of the epistemic value of theories and other texts nonsensical, which appears to be the conclusion drawn by the literary theorists reviewed earlier in the article. Naturally, without an alternative model of representing, the failure of encodingism might indeed seem to indicate that knowledge is impossible at large. There is, however, a model of representing that is entirely unlike encodingism and that is immune to the postmodern skeptical objections, forming a promising ground for epistemic claims in the sciences and humanities (as well as everyday cognition). We present the model below, highlighting the points of most relevance to literary studies and fiction. Most of what follows is drawn from the work of the originator of interactivism – Mark Bickhard – and should be considered as our presentation of his ideas.
17 Positing that there are some basic and not further explainable foundations for knowledge (e.g. self-evident beliefs or beliefs that are evident to the senses) – foundationalism – has a long tradition in philosophy. The theoretical necessity for it within the encodingist framework has perhaps been best illustrated by Fodor, who followed the encodingist model to its extreme consequence and ended up with radical nativism, where all concepts are innate, though he himself recognized that such a conclusion is absurd (Piattelli-Palmarini 268ff; Fodor, The Language of Thought and "The Present Status of the Innateness Controversy"). Foundationalism is especially prominent in developmental psychology, where children are often assumed to come into the world with an innate set of representations out of which further representations are constructed (see Allen and Bickhard; Mirski and Gut).

The Interactivist Model of Cognition
At its core, the interactivist model of representing builds on the American pragmatist conception of meaning and truth. This of course holds in general terms only, as different pragmatists held differing views on the matter, some of which are certainly inconsistent with the interactivist position. What has been central to American Pragmatism, however, and remains central to interactivism, is the conception of meaning and truth as matters of practice. Now, perhaps the greatest charge against this proposal has been that propositions can be useful without being true, and so the pragmatist has been seen as confusing two different issues when identifying truth with utility. At a "high" cultural level this charge of course holds: we certainly can differentiate between usefulness and truth understood as correspondence in paradigmatically human cultural discourses – sometimes a lie is useful for somebody. However, this fact does not mean that there is no way in which "usefulness" (or "functionality") can be modeled so as to give us an explanation of what lies at the epistemic and metaphysical bottom of cognition. We believe that Bickhard's work offers a model that explains how being useful (or functional) can constitute the meaning of a representation and allow us to model knowledge, and we hope to make this evident in the paragraphs below.
In order to actually model the pragmatist conception of meaning and truth, interactivism starts with what it sees as a more fundamental property – normativity, which it understands as the property of some phenomena in the world to evaluate, epistemically or otherwise, some other phenomena. For centuries, many philosophers have assumed that the world is bifurcated – there is matter and causes, and there is the mind and norms. Representing, and thus truth value, falls within the category of normative phenomena, because to represent something is, among other things, to conceive of it as being one way rather than another, and so to have certain "expectations" or anticipations (i.e. norms) as to its nature and possible interactions with it. Philosophers have explored the whole gamut of possible positions on how the two – the realms of causes and matter on the one hand and of norms and the mind on the other – relate to each other: from the clear-cut dualism of Descartes, through idealisms such as Hegel's (everything is like the mind, normative), to modern-day materialistic reductionism (only matter and causal relationships exist) (Bickhard, "The Interactivist Model" 550). We believe these to be unsatisfactory: other than avoiding paradoxes, there seems to be little rationale for the reduction of one realm to the other or for some dualism that posits limited interaction between two otherwise independent realms (cf. Descartes). Most centrally, normative phenomena seem to have emerged at some point in the history of our universe (the world before life arguably did not involve norms), and thus the desirable explanation should rather show how the two realms link, and linked when the emergence occurred, without denying the reality and respective special characteristics of either (reductive monisms sidestep the problem rather than solve it).
The interactivist solution to the problem of norms and causes co-existing is non-reductive emergentism: normative phenomena (such as life, mind and the socio-cultural world) emerged within the universe with their own special properties; being irreducible to causal phenomena, they must be taken as genuine parts of the universe in their own right. To make this account work, a shift in basic metaphysics is claimed to be required: the traditional substance or entity metaphysics is unable to account for genuine emergence 18 and a process metaphysics is proposed instead (Campbell The Metaphysics of Emergence; Bickhard, "The Interactivist Model" 548ff). Normative phenomena (including representation and truth value) are seen as emergent within far-from-thermodynamic-equilibrium (henceforth FFE) process organizations or systems (life included), to the description of which we turn below. 19,20 FFE systems are ubiquitous: tornados, earthquakes, and, most importantly, life itself (though life is a special kind of FFE process). They exhibit qualitatively different properties than systems that are at equilibrium or near enough to it (Jaeger & Liu). 21 FFE phenomena occur when there is a continual and sufficient amount of energy funneled through the system. Centrally for us, since they are created within the constant flow of energy, their existence is conditional – in order to exist, the energy flow needs to be sustained, and the energy flowing through the system must be greater than in its local surroundings.
Some FFE processes are special because they do things that contribute to the maintenance of the necessary FFE conditions within which they can exist. Such processes are termed self-maintenant because they contribute to their own maintenance. A candle flame, for instance, stays in an FFE state partially thanks to its own activity: the heat produced by the flame melts the wax which saturates the wick that provides fuel, and the draft of the hot air produced by the flame removes waste products upwards, which also sucks in nearby oxygen to further feed the flame from below. Notice, however, that this is possible only within certain boundary conditions, outside of which the flame would not be self-maintenant and would most likely go out. For instance, there has to be enough oxygen around and wax below. We can say that these conditions are existentially presupposed by how the flame is organized: they are what is necessary for the flame to be self-maintenant and continue existing. Already at this point we could begin to talk about normativity in relation to the candle flame: certain contexts are "good" for the candle flame and others are "bad." The reason why this is only a germ of normativity is that the flame does not contain any processes that could do anything about its staying self-maintenant – nothing in the flame "cares" about staying in existence. For that, we need the system to be recursively self-maintenant.
The FFE process self-maintenance is only the ground level of normative emergence that gives rise to an ontological hierarchy (Campbell, The Metaphysics of Emergence 191), and the phenomena most relevant for the present article occur further up that hierarchy. It is possible for an FFE system to be not only self-maintenant, as the candle flame is, but also recursively self-maintenant. A recursively self-maintenant system is capable of changing its organization, depending on its environmental context, to one that is functional (self-maintenant) within that context and allows the system to persist in it. This is, in essence, the interactivist definition of life. A sugar-eating bacterium is the classic example here (Alon et al.). Such bacteria tend to swim up the nutrient (sugar) gradient, and they do that by alternating between two modes of behavior – swimming and tumbling. If the bacterium detects that it is swimming down the gradient, it reacts with tumbling and then starts swimming and eating again. If its new direction goes up the gradient, the bacterium continues swimming; if the new direction still goes down the gradient, the bacterium tumbles again. The two activities – swimming and tumbling – are functional in the going- and not-going-up-the-gradient conditions respectively, and the process of switching between the two in appropriate conditions is the recursively self-maintenant process that ensures the system stays self-maintenant across these two conditions. In short, then, a candle flame would be recursively self-maintenant if it could change its mode of burning depending on its environment, perhaps decreasing the volume of the flame in advance so as to maximally prolong the time of burning until oxygen is once more regularly accessible. The processes of this hypothetical flame that did the job of monitoring the external conditions and switching to appropriate modes of burning would in essence exhibit minimal normativity: they would "care" about and for the flame's existence and could be in error if the presupposed conditions were not, in fact, the case.
18 What is termed here "substance metaphysics" is the either explicit or, in many cases, implicit view that the fundamental nature of reality is entity – that the world is either an unchanging whole or is composed of unchanging things, atoms or particles that are irreducible building blocks of everything else. Historically, the explicit articulation of the view came from Parmenides, and it has been widely accepted by the majority of thinkers across the ages, but also contested by many others, especially in modern times (e.g. Derrida; Calamari; Seibt, "The Myth of Substance" and "Existence in Time").
19 Within the framework of the process metaphysics adopted here, there is no a priori defined ground level of processes out of which other processes are composed. Rather, the assumption is that everything (i.e. every process) can be analyzed as a system of processes, or viewed as a single process constituting some other process, depending on the demands of the current analysis. For that reason, we freely interchange the terms "process" and "system" depending on the optics most illustrative for the particular context.
20 The centrality of FFE conditions has been partially recognized by another major framework within cognitive science: enactivism, or rather its autopoietic variant (e.g. Di Paolo). For comparisons between the two, see Bickhard ("Inter- and En-activism"); Mirski and Bickhard.
21 An example of a process that occurs near the thermodynamic equilibrium would be a robot. One important difference between a robot and an FFE organism is that the former can be brought back into existence even after reaching the equilibrium (e.g. by putting in new batteries), while the latter cannot: when an organism dies, its organization is lost and cannot be recovered.
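The bacterium's alternation between swimming and tumbling is, at bottom, a simple control loop, and can be sketched in code. The following toy simulation is purely our illustration (the one-dimensional nutrient field and all names are invented for the example, not drawn from the interactivist literature); it shows that merely switching between two modes of activity under the appropriate detected conditions reliably carries the system up the gradient, with no explicit representation of the gradient anywhere in the system:

```python
import random

def sugar(pos):
    """Toy one-dimensional nutrient field, peaking at position 0."""
    return -abs(pos)

def swim_tumble_step(pos, direction):
    """One cycle: swim one unit; if the move went down the gradient,
    tumble, i.e. re-orient at random."""
    new_pos = pos + direction
    if sugar(new_pos) < sugar(pos):          # detected: going down the gradient
        direction = random.choice([-1, 1])   # tumble
    return new_pos, direction

random.seed(0)                               # fixed seed for reproducibility
pos, direction = 50, 1                       # start far from the peak, heading away
for _ in range(500):
    pos, direction = swim_tumble_step(pos, direction)
# pos ends up in the vicinity of the nutrient peak at 0
```

The "knowledge" here is entirely implicit in the conditional switching itself: the swim mode existentially presupposes an up-the-gradient environment, and tumbling is the functional response when that presupposition fails.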
If radical (postmodern) skepticism were correct, recursively self-maintenant FFE systems (including living organisms) could not detect their environments and change their mode of organization accordingly. But it is clear that this detection can be achieved in the simplest cases by triggering: the presence or some threshold concentration of some molecule in the immediate environment can cause the organism to switch to a different mode of functioning (e.g. swimming or tumbling). Note that already at this point it is existentially critical for the organism that the conditional links between modes of operation and detected features of the environment be correct; otherwise, if the environment does not satisfy the implicit presuppositions of the adopted mode of functioning, the organism will malfunction and eventually disintegrate. With this we are getting closer to the more paradigmatic cases of normativity and ultimately truth value, though we are not there yet.
It is clear that higher animals instantiate more complex architectures than that of a bacterium. What is crucial for us is that these organisms have evolved a split of the functions of detection and action selection (which are one in the case of triggering) into separate sub-systems, one of which detects possibilities for interaction, while the other selects which one of those possibilities to actually engage in. This is necessary for organisms that have complex needs, such as avoiding predators on the one hand and feeding on the other. The former set of processes, the ones doing detection and indication of interaction possibilities, is where we find what in interactivism counts as mental representation. The indicated possibilities for interaction (i.e. mental representations) present different trajectories in the flow of the organism's process – they implicitly anticipate those trajectories and by the same token implicitly presuppose everything about the organism's context that is necessary for the indicated action to be carried out to its fruition. In other words, representation is modeled as anticipatory indication of possible interaction with reality. The often-cited example of the frog should make this clearer: a hungry frog will generally snap its tongue at flies in its visual field, its organism anticipating that the activity will lead to digestion and the satisfaction of hunger. If what the frog takes to be entities that will satisfy its hunger are in fact not that – for example, they may be very small stones thrown by a child messing with the animal – then the frog can be said to be wrong about reality, to have a false representation of it, which has immediate consequences for its own FFE existence (the frog may die), and thus the frog's anticipation of what interaction the situation allows or affords is normative.
That is, saying that the frog is wrong is not merely descriptive, not only a matter of an observer's ascription of a mistake: the frog's cognitive/representational error stands in relation to how the frog is organized and what its organization presupposes about reality, and it has real-life consequences for the frog itself, regardless of the frog's lack of reflection on its cognitive activity.
The next step in this evolutionary sequence is the emergence of organisms that can learn from their mistakes, thus making use of the truth value of interactive indications. 22 The basic idea is that interactive indications can be conditionalized and that conditionality can be discovered, in its simplest form via trial and error. The minimal case is the organism randomly varying its mode of operation in the face of anticipatory failure – "if I fail to achieve the anticipated final state of my action, then let me try to do it differently." If capable of such learning, the frog might finally discover a way of differentiating fly situations from stone situations by means of, for instance, visually scanning the surroundings for the presence of people and making the tongue-snapping indication conditional on the outcome of such a scan ("eating possible if no human is there, not possible if there is a human"). Such conditionalizing becomes, in essence, functional reorganization of the mind to better fit with reality; exploring conditional contingencies (or inter-dependencies) between possibilities of interaction with the environment is in essence a quest for truth. The conditionalizing of interactive indications can get extremely complex, forming vast webs of dependencies, especially in organisms as cognitively sophisticated as humans, and is argued in interactivism to be essentially how we know reality, how we can be wrong about it and how we can discover that we are wrong.
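The conditionalization of an interactive indication can be rendered as a minimal sketch (our own simplification: the situations, the "visual scan" flag, and the single deterministic revision that stands in for genuinely random trial-and-error variation are all invented for illustration):

```python
# Situations the frog encounters: (what is actually there, is a human visible?)
SITUATIONS = [("fly", False), ("stone", True), ("fly", False),
              ("stone", True), ("fly", False)]

class Frog:
    def __init__(self):
        # Initially the indication is unconditional: any small moving
        # speck indicates "snap-and-digest".
        self.snap_indicated = lambda human_present: True

    def encounter(self, target, human_present):
        if not self.snap_indicated(human_present):
            return "refrain"
        if target == "fly":          # the anticipated digestion succeeds
            return "digest"
        # Anticipatory failure: reorganize by conditionalizing the
        # indication on the outcome of a scan for humans.
        self.snap_indicated = lambda human_present: not human_present
        return "error"

frog = Frog()
log = [frog.encounter(t, h) for t, h in SITUATIONS]
# log == ["digest", "error", "digest", "refrain", "digest"]: after the first
# failed anticipation, the frog refrains in human-present contexts.
```

Note that the error here is system-detectable: the failure of the anticipated outcome is itself what triggers the reorganization, which is precisely what encodingism could not provide.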
The above might strike the reader as too simple or too fundamental to really account for full-blown human knowing. It is, however, only a proof of concept that the posited dynamics manage to overcome the skeptical problem in principle. A fuller discussion can be found in the interactivist literature (e.g. the Bickhard works listed in the References; Bickhard & Campbell; Bickhard & Terveen; Campbell The Concept of Truth and The Metaphysics of Emergence; Mirski and Bickhard). The central point is: if such basic organisms as a bacterium or a frog can have epistemic access to reality – that is, if the above proposal holds – then so can a human. Below, we sketch how paradigmatically human cognition is modeled within the framework advocated here.
Interactivism implies constructivism for those organisms that are capable of learning: an organism, for example a child, will generate different flows of anticipatory processes and retain those that are not falsified by reality. In humans, this starts with basic sensori-motor contingencies, such as hand-eye coordination when manipulating objects. The retained anticipatory organizations form the basis for further interactive trials and further expansion of the child's interactive knowledge – hence constructivism. A representation of a physical object, then – say, a toy block – is gradually constructed via interaction: the child learns how manipulating the object will change her perceptions of it and anticipates those changes when actually interacting with the object (cf. O'Regan & Noë). Such representations – i.e. stable anticipatory organizations such as those of physical objects – come to "furnish" our reality; it is with the use of such stabilities and the relationships between them that we navigate the world – they form what we tend to call things (or entities) and relations between them. 23 The cognitive phenomena discussed above are entirely implicit; that is, they do (or embody or instantiate) representing, but they do not explicitly represent the fact of representing and its content. An agent with only such processes has phenomenal, but no reflective, consciousness (Bickhard "Consciousness and Reflective Consciousness"; cf. Mead 75ff): the agent represents the world through interacting with it, but it cannot interact with its own processes, and thus cannot represent them. In order to explicitly represent the content of anticipation, a meta-interactive process is needed, one that will interact not with the outside environment but with the representational processes that interact with the environment. Such a level-2 interactive system could then form anticipations about the properties of level-1 process organization (because it interacts with them) and effectively engage in what is the framework's model of internal thought (Bickhard, "Levels of Representationality"; Campbell & Bickhard). Likely one of the first deployments of level-2 interaction results in the conceptualization of perceived reality into categories that are united by some abstract property – the emergence of what is traditionally meant by concepts, though there will be different kinds of conceptual knowledge in the present framework (see Bickhard, in preparation). A level-1 knower, for instance, could learn to appropriately interact with, say, a hammer, but would not be able to reflectively categorize objects as hammers using, for instance, some set of necessary and/or sufficient conditions. Clearly, the use of explicit concepts grants multiple adaptive advantages, such as the extension of a category to new, unfamiliar cases, planning before acting, recall and reconsideration of past events, and so on.
22 It is not clear whether the frog is capable of learning to differentiate between flies and pebbles given enough practice with both. Such empirical issues are irrelevant here.
23 Notice that such stabilities are what is assumed to be basic by practically everybody (and have received scholarly authorization within encodingist frameworks, where organisms are said to encode objects in their representational units). In interactivism, the acquisition of such stabilities is a non-trivial cognitive developmental achievement, and it is not achieved by most species.
Importantly, once some level-2 way of thinking is formed and used, it changes level-1 organization as well. Drawing on the above example, once in possession of the abstract concept of a hammer, one creates functional links between anticipatory organizations of level 1 that would otherwise be unconnected. For instance, when some storage place is assigned to objects falling under the category of a hammer, putting a hammer away in its place becomes one of the affordances that perception of a hammer offers, which would not be the case without the reflective categorization. In other words, the properties of level-2 interaction become externalized into level-1 organization, where (and this is a very important point) they can themselves be reflected on by level 2, which functionally gives us level-3 reflection: thinking about one's thinking. And so on, in principle indefinitely, though the physical limitations of brain dynamics clearly limit this multiplication of the levels of cognitive processes. It is important to note that the role of language seems substantial here, as linguistic forms clearly allow for highly systematic externalization of our thinking and subsequent examination of the externalized organization (see Campbell & Bickhard 87).
There is much more to be said about the interactivist model than this short presentation allows. However, we believe that the above suffices to show that we now do have an alternative to encodingism, an alternative that does not fall victim to the (postmodern) skeptical critique, which, we believe, renders the epistemic pessimism of theoreticians of literature unnecessary. Cognition is possible and consists, in its most primitive form, in anticipation of the interactive flow; it is primarily embodied, procedural knowledge, and explicit, conceptual or propositional thinking comes later as a result of reflective abstraction made possible by a second (and higher) interactive level that forms anticipations about that basic know-how. 24
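The core constructivist mechanism sketched above (generate anticipatory organizations; retain those that reality does not falsify) can be caricatured in a few lines of code. The following toy simulation is entirely our own illustration, not part of the interactivist literature: the actions, objects, and outcomes are invented for the example. A learner repeatedly interacts with a small world, keeps the anticipations whose predicted interactive outcomes come true, and discards those that are falsified.

```python
import random

# The "true" interactive outcomes of a toy world, hidden from the learner.
# All names and outcomes here are invented purely for illustration.
TRUE_OUTCOMES = {("push", "block"): "moves", ("push", "wall"): "stays",
                 ("grasp", "block"): "held", ("grasp", "wall"): "fails"}

def learn(trials=300, seed=0):
    """Retain anticipations that are not falsified by interaction."""
    rng = random.Random(seed)
    anticipations = {}  # (action, object) -> anticipated outcome
    for _ in range(trials):
        situation = rng.choice(sorted(TRUE_OUTCOMES))
        # Use a retained anticipation if one exists, otherwise try a variation.
        guess = anticipations.get(situation) or rng.choice(
            ["moves", "stays", "held", "fails"])
        if TRUE_OUTCOMES[situation] == guess:
            anticipations[situation] = guess      # not falsified: retain
        else:
            anticipations.pop(situation, None)    # falsified: discard
    return anticipations

print(learn())  # after enough interaction, anticipations match the world
```

The point of the caricature is that the learner never "reads off" the world directly: it only ever proposes anticipations and lets interaction eliminate the wrong ones, which is the constructivist alternative to encoding.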

Social Ontology and Language
We would now like to address the realm of language and culture, which is relevant to the problem of cultural phenomena (scholarly theories included) and their epistemic and ontological character. Bickhard has developed a detailed model of both language and social ontology that remains consistent with the rest of his framework and thus, like that framework, avoids the skeptical arguments (Bickhard "Language as an Interaction System," "Social Ontology as Convention," "The Social-Interactive Ontology of Language"; Mirski and Bickhard). Notably, the interactivist model of language (and social reality) to some extent overlaps with what postmodernist thinkers have claimed about language and its relation to social reality. Here too we arrive at the claim that our social reality (and our knowledge about it) is socially constructed and that it can be constructed in a variety of ways. However, this does not imply that language is entirely self-referential and can in no way help us explore extra-verbal reality, or that our knowledge cannot be evaluated epistemically, which seems to be the conclusion many postmodern thinkers want to draw. Below we explain how this is so.
Social ontology, including language, emerges in the process of conventionalization of the minds of agents that interact with each other. Conventionalization happens when two agents face the coordination problem (Schelling); that is, when they are attempting to anticipate each other's behavior. This poses a coordination problem because successful interaction between two agents who are simultaneously trying to anticipate each other requires each of them to embed the perspective of the other in their own anticipation. And since the other's perspective includes, in this context, how they anticipate the agent will behave, this leads to an infinite regress: the coordination problem. I am trying to anticipate you anticipating me anticipating you, and so on. Perhaps the simplest example is the case of two people trying to pass each other on the street: if I try to move to the right of the other person, I am necessarily presupposing that the other person wants to move to my left, and they have to hold a complementary assumption; otherwise we will not achieve coordination and will bump into each other (which, as we all know, sometimes happens). It is plain, however, that we can (in the limiting case randomly) stumble upon mutually consistent anticipations: by sheer luck you might choose the right side and I the left. As we have already discussed, anticipatory success leads to learning, that is, to the retention of the anticipatory organizations that were successful in the past. In consequence, given enough exposure, two interacting agents will settle on some solution to the coordination situation and will attempt to solve it in the future in the way that worked before; and if it works again, this will only strengthen this solution for future use.
This mutually established way of handling coordination problems is what Bickhard terms a situation convention: an anticipatory agreement between two or more agents as to how to behave in a given social situation that poses a coordination problem. Once it is established, it effectively becomes part of reality for the agents involved: just as there are situations involving stones or sticks, there are situations involving particular situation conventions; those conventions emerge and stabilize in a society and have relative independence from any one individual's mind.
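The convergence on a situation convention described above can also be caricatured computationally. The toy simulation below is our own illustration, not drawn from Bickhard: two agents repeatedly face the pass-on-the-street problem; each retains whichever of its own choices last led to successful coordination and discards a choice falsified by a collision. Once the pair stumbles on mutually consistent anticipations, the solution is retained and stabilizes.

```python
import random

def coordinate(rounds=40, seed=1):
    """Two agents converge on a convention for passing each other."""
    rng = random.Random(seed)
    retained = {"A": None, "B": None}   # each agent's retained solution, if any
    history = []
    for _ in range(rounds):
        # Act on the retained anticipation if there is one, else try at random.
        a = retained["A"] or rng.choice(["left", "right"])
        b = retained["B"] or rng.choice(["left", "right"])
        success = (a != b)              # complementary sides: the agents pass
        history.append(success)
        # Anticipatory success leads to retention; failure to discarding.
        retained["A"] = a if success else None
        retained["B"] = b if success else None
    return retained, history

retained, history = coordinate()
# Once a success occurs, the convention persists in every later round.
```

Note that neither agent "mindreads" the other; the convention exists only as the mutual complementarity of the two retained anticipations, which is the sense in which a situation convention is interpersonal rather than located in any one head.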
Naturally, the problem of identifying what situation convention is occurring or available to enact remains, but in the minimal case conventions can be tied to some collectively epistemically available aspect of the context of interaction (e.g. walking in each other's direction on the street). Much more interesting, however, are cases where the agents involved can enact a situation convention regardless of the context, which is possible because humans can create their own "context": they can use gestures or grunts that are producible largely without regard to what physical situation the agents are in. Thanks to this, it becomes possible for people to develop conventional ways of creating situation conventions: e.g. waving my hand at you can easily be used to make you walk up to me and start a conversation. And since we can create a situation convention like this, we can also modulate an existing situation convention in an analogous way, which in essence amounts to communication: for instance, if in a football-match situation convention the referee raises their hand and blows their whistle, they effectively change the situation from "ongoing play" to "game stopped." What is central for us here is that communication is made possible not by some coding and decoding of unit messages, as it has traditionally been modelled (see, for instance, Gricean pragmatics), but by mutually compatible anticipations between agents (regardless of any possible minor differences of perspective or opinion) as to what the other will do. Situation conventions create an additional, cultural layer of reality that is accessible and manipulable collectively by any person competent in a given culture: it is a real part of our reality, albeit with a different ontology than the physical world within which it emerges.
This goes against the frequent postmodern assumption that because cultural reality is conventional and man-made, it is in some sense fictional: it certainly is special and has new, fascinating properties, but it is as real as anything else.
It is important to note that the recursion of reciprocal characterizations necessarily involved in the coordination problem does not pose a problem for this model, for two reasons: (1) a solution to a coordination problem is not achieved by "mindreading" the other person, but by reliance on precedent (convention; this point is drawn from Lewis's Convention: A Philosophical Study, 1969); and (2) interactivist representation is cast in terms of anticipatory organizations that implicitly presuppose facts about reality, which means that the unbounded hierarchy of perspectives involved in social situations need not be explicitly represented, but only implicitly presupposed (much as animals presuppose gravity in their actions without explicitly representing the phenomenon).
The above is the essence of how interactivism accounts for culture: in a group of people who interact with each other daily over long periods of time, there will be a gradual buildup of conventions standing in various relationships to each other: a convention can be embedded in another (a coin toss before a soccer match is embedded within the wider convention of the match), be necessarily preceded by another (employment in modern times is conventionally preceded by the signing of a contract), and so on. There are many welcome implications of this construal that are worth mentioning: (1) Conventions spread: if a solution worked with one person, an agent will attempt to use it with other people, which explains, among other things, why interactive contact between one culture and another usually leads to some degree of "cross-pollination." (2) Conventions are constituted as anticipatory complementarity between agents' minds in a society, which accounts for the social constitution of reality together with its relative independence from any one mind (the Red Riding Hood story has a wolf in it in virtue of the overall social agreement about that, even if one person thinks it features a fox rather than a wolf). (3) Conventions are socially constructed but involve presuppositions about the non-social world as well (a dinner convention involves glasses and other physical objects).
This last point deserves to be elaborated a little further. Any object that we can talk about will have a conventional, socially constructed aspect to it in virtue of being linked to the convention of language: cognizing such an object will also offer the possibility of talking about it with others or conceptualizing it to oneself with the use of language. Physical objects, such as glasses, offer not only potential interactions with their physical aspects (e.g. grasping them and drinking from them), but also potential situation conventions that involve the glass (e.g. drinking a toast during dinner). Because of that, when we cognize a glass, we cognize it as both part of the physical world and part of the social world, and which of these two aspects we focus on in a particular interaction depends on our current purposes: when you are thirsty and take the glass to drink from it, you are acting on its physical aspects, but when you drink a toast to your host, you are focusing on the socio-cultural aspect of glasses.
On the other hand, the social constitution of reality in this model is not as complete as postmodernism tends to view it: my representation of a glass involves presuppositions about its conventional significance, but it also involves presuppositions about its physical properties, which are truth-evaluable independently of the glass's conventional aspects: if I presuppose that a glass will not break when I drop it on a hard floor, my presupposition will be falsified regardless of what culture I am from (unless I am dealing with a really sturdy glass). Importantly, the conventional aspect of an object usually tends to be consistent and intertwined with its non-conventional aspects: a glass comes into culture in virtue of its physical properties, and the conventions that link to it usually presuppose some of its non-conventional properties as well. Consider, for instance, the role of chalices in religious rites: other objects, ones that do not have the relevant physical properties that allow them to hold liquids, could not possibly be used instead in such situation conventions.
There are, however, objects in our socio-cultural reality that are entirely conventional: for instance, all kinds of social institutions, such as marriage, nationalities, or fiction. While these conventions involve some kind of physicality (e.g. a story can be written in a book or told orally; nationality can be tied to a place of birth or to one's ancestry), their ontology does not depend on any particular physical manifestation, but rather consists in the coordination between the minds of people; it is fundamentally conventional. That is, when we talk about such objects, we abstract away from their physical aspects, as those aspects are not central to the nature of those objects: what matters is the role these conventions play in the organization of social dynamics, not their role in physical dynamics (unless, of course, we do want to talk about the physical aspects, as part of literary theory has done in recent years; see, e.g., Maziarczyk's The Novel as Book).
A few more words are due specifically on language in this model. As has already been mentioned, traditional models of linguistic interaction view it as a process of coding and decoding messages. This runs into the same basic problems as encodingist models of representation in general: just as I cannot check whether my representation of the physical world is true, I also cannot check whether my representation of what the message is about is true. The traditional transmission model of language is replaced in interactivism with a transformation model (Bickhard, Cognition, Convention, and Communication): linguistic utterances are emergent conventional "tools" for transforming situation conventions, and since situation conventions are not within any individual head, but rather are commonly accessible interpersonal phenomena, it is possible for people to understand each other, or rather to negotiate the situation as they interact with each other, testing their presuppositions as to what the situation is. In other words, language in the interactivist model is a special case of convention: it is a convention for interacting with situation conventions. It serves this purpose largely due to its systematicity: different parts of speech serve as intermediate steps and constrain the functional effects of the following ones; for instance, "give me the red toy" involves step-like modifications to the situation convention that only together achieve their full meaning (i.e. "give" modifies the situation convention in such a way as to make possible the specific interpretation of "me" in the sentence; if "me" were preceded by some other word, its effect on the situation convention would be different). Although linguistics is still dominated by the transmission conception, there is at least one model being developed that, similarly to interactivism, follows the transformational framework (Gregoromichelaki & Kempson; Kempson et al. "Language as Mechanisms for Interaction" and "Action-Based Grammar").
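The step-like, operator-on-situation character of utterances can be given a programmer's caricature. In the sketch below (entirely our own illustration; the words, roles, and state keys are invented for the example and are not part of Bickhard's model), each word is a function that transforms a shared situation-convention state, so that the functional effect of "me" depends on what the preceding verb has already done to the state:

```python
# Each word is modelled as a function from situation-state to situation-state.
def give(s): return {**s, "act": "transfer", "next_person_role": "recipient"}
def show(s): return {**s, "act": "display", "next_person_role": "audience"}
def me(s):   return {**s, s["next_person_role"]: "speaker"}  # verb-dependent
def the(s):  return s                                        # no-op here
def red(s):  return {**s, "object_color": "red"}
def toy(s):  return {**s, "object_kind": "toy"}

OPERATORS = {"give": give, "show": show, "me": me,
             "the": the, "red": red, "toy": toy}

def interpret(utterance, situation=None):
    """Apply the words of an utterance, in order, to the situation state."""
    state = dict(situation or {})
    for word in utterance.split():
        state = OPERATORS[word](state)
    return state

print(interpret("give me the red toy"))
print(interpret("show me the red toy"))  # "me" now marks an audience instead
```

Nothing is "decoded" here: understanding consists in both parties converging on compatible transformed states, and a mismatch would surface as a falsified anticipation in the subsequent interaction.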

Interactivist Model and Epistemic Value of Scholarly and Scientific Theories
Now we are able to address the question of the epistemic value of scientific theory, as well as the possibility of forming theories about the cultural world (theories proposed in the humanities, like the interpretations of postmodernist fiction with which our text opens). In the current model, theories are high-level reflective conventions that can be about (have presuppositions about) both physical and social reality. Below we unpack this claim.
Theories are reflective because they are explicit (at least level-2) abstractions of the agent's implicit presuppositions. To have some folk theory of, say, physics is to generalize over the kinds of interaction with the physical world that are possible: for instance, to represent that things fall to the ground when unsupported is to have a conditional generalization that presupposes that any interaction with reality will honor this rule. 25 The knowing agent is unconscious of possessing such theories; higher levels of knowing yield such consciousness, which is most typical of science. Scholarly theories about social reality are not really more problematic than theories about the "natural" world: the framework offers an ontology of culture, and it makes perfect sense that humans make assertions about that realm too. Theories about the social world (e.g. about the cultural trend of postmodernism) make assertions about the conventional layer of reality. Since that conventional layer actually exists, such theories can be epistemically evaluated. The fact that a convention is socially constituted and can change with time does not undermine this: a statement about a piece of cultural reality at a time t is true as long as the relevant convention exists in the culture at time t and exhibits the qualities that have been asserted of it. The extent to which it exists, as well as the accuracy of the theoretical characterization, are part of the epistemic challenge here, but so they are where the natural world is concerned. As long as the theory works (gives correct predictions, which is manifest in interactions that presuppose the theory), does not lead to false anticipations, and is not inconsistent internally or with other well-established theories, it should be held as plausible, pending any evidence to the contrary.
In other words, pace the postmodernist thinkers discussed at the beginning of the paper, assertions about purely conventional objects are still epistemically evaluable in this model. Consider McHale's concept of the ontological dominant in postmodern fiction. His theory concerns a purely conventional object and consists in characterizations of late-twentieth-century English-language fiction. Demonstrating that the (ontological) theme of the real and fictional status of reality is missing from the fiction in question would falsify the theory; thus the theory is epistemically evaluable.
Of course, interpretive statements are difficult to falsify, at least in terms of contemporary theories of interpretation, which recognize the active participation of the reader's mind in the construction of a work's meanings and values. We do not mean to belittle the challenge. Though conclusive falsification of interpretive statements, such as the highly implausible statement that Julian Barnes's Flaubert's Parrot shows emotions to be purely natural (uncontaminated by culture), appears unavailable most of the time, it seems possible for the purposes of scholarship (whose aim is to understand the work and its impact on culture) to evaluate interpretations in terms of their relative ability to capture the core (objective) meaning of the work (for a detailed discussion of problems related to falsifying interpretive hypotheses, see Teske). Nota bene, some texts offering inspired and imaginative interpretations of art, which have been presented as research papers, might best be classified as creative/artistic works, in which the original work of art serves as inspiration for a second-order work of art.
Modern-day scientific theories are conventions because they are socially constructed. Theoretically, the ability to develop reflective representations of reality does not require sociality, but what we mean by theory today is a socially constructed representation of reality, and so it is conventional. Thanks to language and the possibility of communication, we are capable of exchanging reflective assertions and of developing theories collectively as an academic community, which allows for their development beyond the life of any individual scientist or scholar.
Finally, the issue of truth needs to be addressed. The interactivist approach to truth can be construed in terms of modal correspondence and consequentialism. Indications of possible interactive processes are truth-bearers because they implicitly presuppose, whether truly or falsely, that an indicated interaction is possible in reality (and thus that the world is such that the interaction is possible in it). The truth of the indications consists in modal correspondence: 26 the indication of a possible interactive process is true if it correctly corresponds to "actual" possibilities of the interactive flow. The frog's indication that it can drink water to quench its thirst is true if indeed by drinking water the frog can quench its thirst. 27 It is important to note that to have such modal indications an agent does not need to have explicit representations: the beginning of knowledge consists only in the differentiation of reality into types where the indicated interactions are possible and types where they are not. As we have discussed, this gets extremely complex for human agents, as indications conditionalize in various ways, building anticipatory organizations that constitute our representations of various elements of reality. Explicit representing comes about when the agent reflectively anticipates some property of the indications of possible interactions with the environment, which requires a second interactive level, but it bears truth value in a way analogous to the first. A reflective anticipation presupposes that some abstract property holds of the anticipatory organization, and thus also of reality: the property (e.g. that objects always fall when let go of) is directly a property of the agent's anticipatory organization, constructed via past interaction with the world, but indirectly it is a property of the world that the anticipatory organization reflects or "honors." Modal interactive indications involve presuppositions.
A presupposition is true insofar as an interaction that makes it will flow as anticipated; it is false insofar as it will not: this is consequentialism. Because each interactive indication involves a huge (possibly infinite?) number of (implicit) presuppositions, it might be impossible to correlate with certainty the (un)successful flow of the interactive process with specific presuppositions involved in this interaction. Agents with second and higher levels of knowing can attempt such assessments, but presumably they will be unable to conclusively verify or falsify them, which is why it is reasonable to speak of assertions as being plausible or implausible, rather than true or false (cf. fallibilism).
Science is, in the interactivist model, a social cognitive process which examines such correlations: by learning which presuppositions are plausible (i.e. contribute to the interactive process developing as anticipated), we learn what reality is likely like. Science owes its cognitive capacity to its self-reflection and its social (collective) character, but the scientific method is ultimately the same method that can be found in every cognitive process (and in the basic mechanism of evolution): variation (in science less random than elsewhere) and selective retention. Here it is theories (sets of interdependent, often highly abstract presuppositions) that compete with each other. The ones that survive are presumably closer to the truth than the ones that do not: effectiveness is the basic constraint. But science has developed other constraints for theories, such as self-consistency or explanatory power, which serve as further criteria for the selection of theoretical statements. Truth here is the ultimate meta-constraint, i.e. the constraint implicit in all the other constraints. Some positions in the philosophy of science have already argued for a similar view of science; a highly informative review can be found in Donald Campbell's "Evolutionary Epistemology." Interactivism can be viewed as a model of cognition that complements those views and provides them with stronger theoretical grounding.
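The variation-and-selective-retention logic just described can be caricatured in code. In the toy sketch below (our own illustration; the "world," the candidate space, and the observations are invented for the example), variation supplies a pool of candidate "theories," each a pair (a, b) predicting that an observation at x yields a*x + b, and each successive observation selectively eliminates the candidates it falsifies:

```python
def true_law(x):
    """The hidden regularity of the toy world (unknown to the theorist)."""
    return 2 * x + 1

def select_theories(observations=(0, 1, 4)):
    # Variation: a pool of candidate theories (a, b), each predicting a*x + b.
    candidates = [(a, b) for a in range(6) for b in range(6)]
    # Selective retention: every observed interaction eliminates the
    # candidates whose predictions it falsifies.
    for x in observations:
        observed = true_law(x)
        candidates = [(a, b) for a, b in candidates if a * x + b == observed]
    return candidates

print(select_theories())  # -> [(2, 1)]: only the theory that fits survives
```

Truth as the meta-constraint shows up here as the fact that survival under all observations singles out the candidate that matches the world's regularity; in real science, of course, the elimination is fallible and never this clean.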

Conclusions
We have thus reached the end of our argument. We hope to have shown that scholarly cognition concerning culture is part of cognition in general, and that this cognition at its initial stages is nonverbal and related to the basic condition of FFE (far-from-equilibrium) systems, whose instances include human beings. To survive, an FFE system/process needs to enter into effective interactions with its environment, and this effectiveness is closely related to the accuracy of the FFE system's presuppositions, on which its interactions are based. Systems that form false presuppositions cannot survive and cannot compete with systems whose presuppositions are true. The subsequent levels of knowing (e.g. reflection on one's cognitive processes) and products of the mind (e.g. language) have emerged out of the above basic cognitive mechanism. As argued by Donald Campbell, "evolution . . . is a knowledge process" ("Evolutionary Epistemology" 413): both biological evolution and science work by virtue of (random) variation and selective retention, of genetic mutations and of theories respectively. Those theories that do not work (those that lack a fit with reality, i.e. are false) are eliminated in favor of those that do. This argument, we believe, refutes the postmodern variety of radical skepticism.
26 Needless to say, this modal correspondence (between indications of possible interactive processes and such possibilities in reality) does not involve isomorphic similarity.
27 Whether the agent chooses to engage in an indicated interaction or not is irrelevant to the interactive anticipation in question having truth value: it is the anticipation of potentially being able to carry the interaction out that bears truth value, not its actual occurrence.
Thus, pace O'Neill and McHale, we claim that a theory of postmodernism can be evaluated not only in terms of its being thought-provoking and elegant, but first of all in terms of its epistemic value, above all its ability to give an accurate account of the cultural phenomenon in question. A theory that sees the postmodern dominant as ontological is epistemically valuable insofar as ontology is the postmodern dominant. A theory that sees all theories of culture as fictions devoid of epistemic value is epistemically doubtful -implausible -insofar as theoretical constructions of culture can either succeed or fail to properly characterize their object. The former theory will probably be retained, the latter eliminated, though the processes of testing their epistemic value may take time.
To sum up: though certain truth is beyond our reach (cf. fallibilism), cognition is possible; language is a tool that helps people coordinate their interactions with each other, and thus it is not a self-referential system but a convention most closely connected with reality; and the humanities, which use language as their primary tool, are built along the model of arguably all cognition (i.e. they test their theories/hypotheses in contact with empirical reality or, in the case of nonempirical fields, against some theoretical assumptions) and so can help us investigate culture and thereby understand ourselves.