Reply to Jon Erling Litland

By: Kit Fine  
Open Access | Sep 2025


Jon Litland’s current paper, continuing his earlier work (Litland 2023), is a rich and ambitious contribution to the project one might call ‘Definition First’, in which definition is taken to be prior to essence. His general account of definition is motivated by two principal applications—to the definition of the logical operations; and to the essential manifold, under which an item (or items) may have alternative definitions—although it is, of course, of considerable interest in its own right.1

In what follows, I should like to focus on the former application and to propose an alternative framework, more conservative and less complicated, in which the logical operations might be defined. For the sake of concreteness, I shall develop this alternative in a very particular way, although there may be other ways of developing it, more palatable to other philosophical tastes, that remain largely compatible with the general approach.

I find it somewhat hard to say whether this alternative framework is in conflict with Litland’s. If it is simply a question of intelligibility, then it seems to me that each of us is in a position to make sense of much of what the other wants to say. But if it is a question of priority, then my inclination, for reasons which I hope will become clear, is to understand Litland’s framework in terms of some suitable extension of my own framework. The complexity of his approach can then be seen to be a consequence of the complexity in the underlying ideas.

In addition to the questions of intelligibility and priority, there is a more pragmatic question: what framework should we adopt for the purposes of substantive metaphysical inquiry? The question here is not which symbolic framework is most basic but which is most useful. Of course, many different factors can go into making a framework useful. But there are reasons for thinking that Litland’s framework may be far from ideal in this respect (not that this was his concern). For Litland takes definition to be full or complete rather than partial, immediate rather than mediate, and constitutive rather than consequential. But even if someone accepts a broad notion of essence of the kind expounded in ‘Essence and Modality’ (hereafter ‘E&M’), they may have their doubts about the intelligibility of definition in this strict, full-blooded, sense of the term;2 and even if they accept the intelligibility of definition in this strict sense, they may have doubts about our ability to ascertain that anything is a definition in this sense (especially in regard to the question of whether the definition is complete). However, much of what we want to say for the purpose of metaphysical inquiry does not require the strict sense; and so it is best, to the extent that we can, to make use of a framework which steers clear of the notion.

1 Litland’s Framework

Litland is concerned with definition in a full-blooded sense of the term. But many of the issues that he discusses also arise for a broader notion of definition, in which one or more of the requirements of fullness, immediacy, or being constitutive may be dropped. For this reason, I shall use ‘definition’ in a broad sense and make explicit when Litland’s strict notion is in question.

I will adopt another, relatively minor, departure from Litland’s presentation. In E&M, essentialist claims are expressed by means of an indexed operator ‘□s’, whereas he expresses such claims (setting other complications aside) by means of a variable-binding operator ‘s, x̄ ∥’. I agree with Litland that the variable-binding operator enables one to distinguish between s as a subject (or definiendum) and as an object (definiens). But the cases in which we need to appeal to such a distinction are relatively rare; and so I will continue to use the indexical notation, but with the understanding that ‘□s φ(s)’ is to be read as ‘s, x̄ ∥ φ(x)’, where φ(x) is the result of replacing each occurrence of s in φ(s) with free x.3 One nice consequence of this notational convention is that we can retain the standard formulation of factivity as an implication from □s φ to φ. This, then, is a clear case in which a serviceable notation may diverge from a more basic form of expression.
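
By way of a simple illustration (the example is mine, not Litland’s): where soc is Socrates and the claim at issue is that Socrates is essentially human, the indexed formulation □soc (soc is human) is to be read as the variable-binding claim soc, x̄ ∥ (x is human); and factivity, so formulated, takes us from □soc (soc is human) to the plain proposition that Socrates is human.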

A definitional, or essentialist, claim can be seen to involve two parts: a head; and a body. The head is the definitional prefix, telling us that a given item, or given items, are to be defined; and the body is the definitional postfix, telling us how the item or items are to be defined.

A standard view, in the case of a singular essentialist claim, is that the head is given by something like an indexed operator □s (or s, x ∥) to the effect that s is essentially (an x) such that … and that the body is given by a single sentence (or open sentence) or, under a constitutive view, by a list of sentences (or open sentences) φ1, φ2, …. We can then read □s φ1, φ2, … as saying that s is essentially such that φ1, and φ2, and ….
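
To take a toy example (again my own): letting soc be Socrates, φ1 the proposition that soc is human, and φ2 the proposition that soc is an animal, the claim □soc φ1, φ2 says that Socrates is essentially such that he is human and such that he is an animal.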

Litland’s view involves complications both to the head and to the body. I shall deal with the head later, but the body for Litland is a hybrid creature (I refrain from saying ‘monster’). It involves not just sentences, or the specification of propositions, but also the specification of rules. Thus, the general form of an essentialist claim for him (again, ignoring complications to the head) will be:

□s φ1, φ2, …, σ1, σ2, …,

where φ1, φ2, … are the specifications of propositions, as before, whilst σ1, σ2, … are the specifications of rules.4

But how is such a claim to be read? Litland tells us that we should think of it as saying that s is by definition such that ‘one has’ the propositions φ1, φ2, … and the rules σ1, σ2, …. This is a strange way of talking. For what is it for an item to ‘have’ a proposition or a rule?

I take it that what he means, at the very least, is that the definition of an item will be constituted by certain propositions and certain rules. But this still leaves open the question of how the item relates to the propositions and the rules. And the answer surely is that the item is by definition such as to verify the propositions and validate the rules. But the relationship is different in the two cases; and this suggests that we should be thinking in terms of a two-headed rather than a two-bodied definition. There are two complementary ways in which an item may be defined—either in terms of the propositions that it verifies or in terms of the rules that it validates; and these two forms of definition will have different roles, with the one justifying the assertion of certain propositions and the other justifying the application of certain rules.

However, this proposal may not serve Litland’s purposes, since he wants to bind across the propositional and inferential components of a definition. But there is a more radical difficulty with his proposal, since it is hard to conceive of a definition except in terms of a condition that the item to be defined must meet to be what it is. For what sense can we give to an item being defined (in part) by some thing other than a condition unless that thing is somehow associated with a condition by which the item is to be defined? Indeed, in the present case, if we can make sense of what it is for an item to validate a rule by definition, then we should be able to make sense of what it is for the item to validate the rule simpliciter. But then surely for the item to validate the rule by definition is simply for it to be true by definition that it validates the rule, i.e. for it to be true by definition that the rule is valid.

From this point of view, then, the inferential component of a definition should simply be regarded as a special type of propositional component. Consequently, in place of Litland’s two-bodied formulation:

□s φ1, φ2, …, σ1, σ2, …,

or in place of the two-headed alternative:

□Ps φ1, φ2, …  and  □Is σ1, σ2, …,

(using P and I, respectively, for propositional and inferential definition), we should have:

□s φ1, φ2, …, V(σ1), V(σ2), …,

where the V(σ1), V(σ2), … are now propositions to the effect that the rules σ1, σ2, … are valid. The two-headed or two-bodied creature is transformed into a normal single-headed, single-bodied creature.

But natural as this line of thought may be, it faces a difficulty, mentioned by Litland in another connection (§5), which is that the definition will be inferentially inert. It may tell us that certain rules are valid. But these are simply other propositions; they do not in themselves enable us to make use of the rules. And it is because of this difficulty that Litland wants to provide a separate account of the inferential component of a definition, which may then be used to justify the application of certain rules of inference and not merely the assertion of certain propositions.

Before turning to the definitional head, we need to say something about Litland’s conception of a rule. Given some propositions pp and a proposition q, he takes there to be a rule pp ⇒ q that allows one to infer q from pp.5 Thus, a rule for him is specific rather than schematic. It will concern particular propositions pp and q; and this means that the plural/propositional variables pp and q occurring in the specification of a rule pp ⇒ q will be free variables, subject to quantification or other forms of binding from outside of the specification of the rule.
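
By way of illustration (the propositions are chosen merely for the sake of example): let p0 be the proposition that snow is white and q0 the proposition that something is white. Then there is, for Litland, a specific rule p0 ⇒ q0, which allows one to infer q0 from p0; and the rule concerns those two propositions themselves, not any other propositions of a similar form.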

The question now arises as to how we should state that disjunction, say, is by definition subject to the rule p ⇒ p ∨ q of disjunction introduction on the right. For we want it to be subject to this rule no matter what the propositions p and q might be. But how is the desired generality to be expressed? The obvious and most natural answer is that it is to be expressed through the use of the universal quantifier. Now we cannot say ∀p∀q(□∨ (p ⇒ p ∨ q)), since it will then follow that □∨ (p0 ⇒ p0 ∨ q0) for any specific propositions p0 and q0 yet, in general, ∨ will ‘know nothing’ of these propositions. Nor can we bring the quantifiers inside to the body and say □∨ ∀p∀q(p ⇒ p ∨ q), for (p ⇒ p ∨ q) is the specification of a rule not a proposition and the quantifiers ∀p and ∀q, at least as they are usually understood, can only meaningfully apply to the specification of a proposition.

Litland’s solution is to attach the desired generality to the definitional operator. Thus he would write □∨,p,q (p ⇒ p ∨ q) instead of ∀p∀q(□∨ (p ⇒ p ∨ q)) or □∨ ∀p∀q(p ⇒ p ∨ q). There is a question of how the locution □∨,p,q (p ⇒ p ∨ q) is to be understood. He suggests (§5) that we read it as saying that ∨ is, by definition, such that for any propositions p and q ‘one has’ the rule p ⇒ p ∨ q; and, given that rules have no in-built generality, it is hard to see how else the locution might be understood. But this then strongly suggests that the generality properly attaches to the body of the definition and not to its head, that there is some internal logical complexity in the body of the definition by which the desired generality is to be conveyed; and yet there is nothing in his notation to indicate what this might be.

There is a further difficulty. For given that (p ⇒ p ∨ q) denotes a specific rule, it is highly plausible that the desired generality be conveyed, if not by the universal quantifier itself, then by something analogous to the universal quantifier. But this spells disaster for Litland’s project of defining the logical operations independently of one another, since the definition of the truth-functional operations would appeal to the universal quantifier (or its analogue); and, by the same token, the definition of the universal quantifier, or its analogue, would turn out to be circular. The threat is doubly compounded once one takes into account that, when it comes to discharge rules, Litland’s definition of the logical operations requires a restricted form of quantification, in which the antecedents concerning the auxiliary arguments are themselves logically complex, and calls, in addition, for a highly logically complex understanding of how the rules are to be applied. None of this complexity appears in his official notation. But you do not make something go away by hiding it from view.6

2 An Alternative Framework

We face two main problems in developing a theory of definition that can accommodate the definition of logical operations. One is the circularity problem, the problem of defining the logical operations without making use of those operations; and this problem becomes especially acute when it comes to stating how the rules by which a logical operation is defined may be of general application. The other is the inference problem, the problem of ensuring that the definition of the logical operations will actually license the use of the inferences by which they have been defined. Litland proposes to solve the first problem by moving any unwanted logical complexity in the body of a definition to its head; and he proposes to solve the second problem by allowing the body of a definition to appeal to rules in addition to propositions.

I should now like to propose a simpler, more conservative, framework under which there is no additional complexity either to the head or to the body of the definition. Thus, the definitional head will take the form □s (or s, x ∥) and the definitional body will consist in the specification of a proposition (or list of propositions). To circumvent the difficulties to which Litland’s own framework is a response, I shall need to make two fundamental changes to his conception of a rule. Rules for me must be schematic rather than specific; and they must be modeled on the sequent calculus rather than on the system of natural deduction (in which assumptions can be discharged). Let us deal with each of these changes in turn.

As I have mentioned, a rule for Litland is specific: given some propositions pp and a proposition q, there will be a rule pp ⇒ q concerning those very propositions. By contrast, I take a rule to be a scheme, as with p / p ∨ q, which allows one to infer an arbitrary proposition of the form p ∨ q from any given proposition p. Thus, whereas we need to know what pp and q are in order to know what rule is denoted by Litland’s pp ⇒ q, we do not need to know what p and q are (indeed, such information is irrelevant) to know what scheme is denoted by our p / p ∨ q.

The propositional variables pp and q in the specification of the Litland rule pp ⇒ q are free variables and subject to binding, whereas the propositional variables in the specification of one of our rules, such as p / p ∨ q, are already bound, so to speak, and not subject to further binding. All the same, our schematic rules can still be applied to specific propositions. Thus, if the variables p and q are taken to stand in for the specific propositions p0 and q0, then the schematic rule p / p ∨ q can be taken to have a Litland-style rule p0 ⇒ p0 ∨ q0 as an instance.
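
Thus, to take a particular case (the propositions again chosen merely for illustration): where p0 is the proposition that snow is white and q0 the proposition that grass is green, letting p stand in for p0 and q for q0 yields the Litland-style rule p0 ⇒ p0 ∨ q0 as an instance of our single schematic rule p / p ∨ q; a different choice of propositions would yield a different instance of the very same scheme.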

These rules, whether Litland’s or our own, are propositional rather than formal in character; they concern the inference of one proposition from others rather than the inference of one formula from others. This means that the rules, like the propositions themselves, should be regarded as abstract entities in their own right. There is, however, a natural correspondence with the formal rules familiar from logic. Where p and q are particular sentence-letters, the formal rule p / p ∨ q would correspond to a rule in Litland’s sense; and where A and B are meta-linguistic variables ranging over formulas, then the formal rule A / A ∨ B would correspond to a rule in our sense. However, whereas A / A ∨ B and D / D ∨ E are two distinct meta-linguistic rules, at least when regarded as formal expressions of the meta-language, p / p ∨ q and r / r ∨ s are the very same abstract rule. If one wanted to be able to think of an abstract schematic rule as being composed of abstract schematic elements in much the same way in which a formal schematic rule is composed of schematic letters, then we might think of these elements, the abstract counterpart of the schematic letters, as arbitrary objects in the sense of Fine (2017), though such a view is by no means forced upon us.

There is a temptation to regard talk of ‘the inference scheme A / A ∨ B’ or of ‘the form of inference p / p ∨ q’ as a façon de parler, ultimately to be cashed out in terms of quantification over formulas A and B or over propositions p and q. But this is not our point of view. We take these expressions seriously as providing definite reference to a distinctive kind of entity, the scheme or form, and not as indicating indefinite reference to a specific rule or form of inference that is only made definite once values for the schematic letters ‘A’ and ‘B’ or the propositional variables ‘p’ and ‘q’ have been given.

For later purposes, it will sometimes be helpful to make explicit the binding that we may take to be implicit in the specification of a form of inference. Thus, our previous inferential rule p / p ∨ q might now be represented as the rule p /p,q p ∨ q, schematic in p and q.7 Such a notation would then enable us to distinguish between the constant and schematic elements in a rule. So, for example, given a particular proposition p0, we might wish to consider the ‘hybrid’ rule p0 /q q that is specific in regard to p0 but schematic in regard to q (a rule which would be classically valid when p0 is the falsum proposition). But in order to avoid a trail of subscripts, we will follow the convention that a rule of the form --- // … may be used to indicate that all of the schematic variables in --- and … are to be subscripted. We can then write p /p,q p ∨ q more simply as p // p ∨ q and, where R is the specification of a rule --- / …, we may write |R| in place of --- // ….
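
The conventions may be illustrated as follows (the cases are my own): the rule p, q // p abbreviates p, q /p,q p, in which both variables are schematic; the hybrid rule p0 // q abbreviates p0 /q q, in which q alone is schematic, since p0 is a specific proposition rather than a variable; and, where R is the specification (p / q ∨ p), with p and q free, |R| is the schematic rule p // q ∨ p.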

Our schematic rules are therefore closer to how Schroeder-Heister (2014) conceives of the inferential rules. But even though his symbolism is much the same as ours, its interpretation is very different. Schroeder-Heister writes, ‘According to this reading [the one he adopts] the variables occurring as indices to the rule arrow ⇒ function as universal quantifiers’ (2014: 1190). But this is not how we are thinking of them. A rule, such as p / p ∨ q, exhibits a certain form and is not tied to the universal quantifier or, indeed, to any form of quantificational binding. Litland (footnote 25) writes as if we face a choice between binding the specification of a rule from the inside or from the outside. But for us, there is no choice; if the rule is to exhibit a form then the binding by which the form is made manifest can only come from the inside.8

Instead of saying, with Litland, that it is definitive of ∨ that ‘one has’ the specific rule p ⇒ p ∨ q for any propositions p and q, we can now say that it is definitive of ∨ that the schematic rule p // p ∨ q is valid (□∨ V(p // p ∨ q)), thereby dispensing with rules in the body of the definition in favor of propositions.

We thereby avoid the threat of circularity. For Litland, it is definitive of disjunction that for any propositions p and q ‘one has’ the rule p ⇒ p ∨ q; and this invites the concern that some notion of universal quantification is implicitly presupposed in the body of the definition. But this is not a problem for us. For there is no reasonable way of understanding the binding implicit in the specification of a schematic rule in terms of quantification. In specifying a rule, such as p // p ∨ q, we are simply specifying its form and saying nothing of a quantificational nature. We may still need to appeal to a distinctive type of binding, as when we specify the rule in the manner of p /p,q p ∨ q—or, alternatively, we may perhaps appeal to a distinctive ontology of arbitrary propositions—but all of this can properly be regarded as part of the structural or pre-logical apparatus of form in terms of which the logical operations should themselves be understood.

However, our proposal comes with a conceptual burden. For in defining the logical operations we need to appeal, not only to schematic rules but also to the concept of validity. Whether this is an additional cost is hard to say, for one might think that a rule can only be a constitutive part of the definition of a logical operation for Litland in so far as its validity is a constitutive part of the definition. From this point of view, then, our own definition has the advantage of being more explicit about what is actually involved in defining the logical operations.

But what is this concept of validity? It is a concept of formal validity, simply in the sense that it has application to forms, as embodied in the schematic rules. There is, of course, a further question of whether one should understand the validity of a schematic rule in terms of the validity of its instances. On this view, the schematic rule p // p ∨ q will be valid because each of its specific instances p0 ⇒ p0 ∨ q0 is valid. But we cannot accept such a view since then in saying that it is definitive of ∨ that p // p ∨ q is valid we are, in effect, saying something quantificational about each of its instances. It seems to me, moreover, that this is not a view we should accept. It is far more plausible to maintain that a specific instance p0 ⇒ p0 ∨ q0 is valid because it is of a form, p // p ∨ q, that is valid. Indeed, there has always been a strong inclination to say that a logical rule of inference is valid in virtue of its logical form; and one way to make sense of this, and perhaps the only reasonable way, is that the specific rule is valid because it is of a form that is valid. If this is right then the concept of validity that we wish to use is formal in a more substantive way: not only does it have application to forms, but its primary application is to forms.

Litland wishes to distance himself from some possible epistemological implications of his account (§5, fn. 28), but I would wish to embrace such implications. It seems to me that our knowledge that a basic rule, such as p // p ∨ q, is valid is not properly based on its instances (just as our knowledge that 2 + 2 = 4 is not properly based upon counting batches of marbles). Rather, in a proper ordering of our knowledge, we know that the instances of the scheme are valid through knowing that the scheme is valid. In this way, then, our account of how the logical operations are to be defined can be seen to rest upon, and also to support, a view that grants conceptual and epistemological primacy to a formal concept of validity.9

I have so far dealt with direct rules of inference. But what of discharge rules, such as disjunction elimination? In order to accommodate such rules, Litland has to complicate the logical form of the head of a definition so that it can make reference to the arguments whose assumptions are to be discharged and he also has to complicate our understanding of the body of the definition, so that it can allow for the discharge of those assumptions (§5). But how can we do the same within the body of the definition without danger of circularity?

By not even trying. Instead of modeling the definition of the logical operations on the rules of natural deduction, we may model them on the rules of the sequent calculus. What corresponds to disjunction elimination within the sequent calculus is the following rule:

Δ, A → C      Δ, B → C
――――――――――――――――――――――
Δ, A ∨ B → C

Using Δ now for a plurality of propositions, this corresponds, in the propositional sphere, to the rule V(Δ, p / r), V(Δ, q / r) // V(Δ, p ∨ q / r), to the effect that an arbitrary proposition of the form V(Δ, p ∨ q / r) can be inferred from propositions of the form V(Δ, p / r) and V(Δ, q / r). The discharge rule is thereby converted into a second-order direct rule concerning the validity of certain first-order rules. We may then take it to be definitive of disjunction that this second-order rule is valid (□∨ V(V(Δ, p / r), V(Δ, q / r) // V(Δ, p ∨ q / r))), thereby avoiding any appeal to arguments or to the discharge of assumptions, as occurs in Litland’s account.
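
To see the rule at work on a particular case (chosen merely for illustration): taking Δ to be the empty plurality and p, q, r to stand in for specific propositions p0, q0, r0, the second-order rule licenses the passage from the propositions V(p0 / r0) and V(q0 / r0) to the proposition V(p0 ∨ q0 / r0); the validity of the two specific first-order rules thereby delivers the validity of the third.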

We should take careful note of the implicit binding in the formulation of this rule. The rule is not V(Δ, p // r), V(Δ, q // r) / V(Δ, p ∨ q // r), with inner bindings, since the schematic rules Δ, p // r and Δ, q // r in the antecedent are not even valid, but V(Δ, p / r), V(Δ, q / r) /Δ,p,q,r V(Δ, p ∨ q / r), with outer binding as indicated. Thus, the second-order rule concerns a form of inference in which we make a passage from the validity of certain specific first-order rules to the validity of another specific rule.

The present use of second-order rules requires two broadenings in our understanding of the concept of validity. Before, we only applied the concept to schematic rules, as in V(p // p ∨ q). But it must now be applied to specific rules as well – as in V(Δ, p / r) above (although the variables Δ, p, and r are later bound in the context of the second-order rule). This raises the question as to how the application of the concept of validity to specific rules is to be understood. A natural view is that a specific rule should be taken to be valid if it is an instance of a valid schematic rule. We might even suppose that there is the concept of the logical form LF(R) of a rule R, where LF(R) is the schematic rule S of which R is an instance and which is subject to the requirement that LF(LF(R)) = LF(R). We could then take V(R) to hold, for R ≠ LF(R), when V(LF(R)) holds, thereby reducing the application of the concept of validity to the cases in which the application is to a schematic rule.
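
For example (on the natural view just mooted): LF(p0 / p0 ∨ q0) = p // p ∨ q, so that V(p0 / p0 ∨ q0) will hold in virtue of V(p // p ∨ q); and, since p // p ∨ q is already schematic, LF(p // p ∨ q) = p // p ∨ q, in conformity with the requirement that LF(LF(R)) = LF(R).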

In the second place, we need to adopt a broader conception of form. Before, we could take form to be logical in a narrow sense of ‘logical’. But we must now be able to treat the concept of validity as itself something which should remain fixed. If, for example, we are to take the schematic rule V(Δ, p / r), V(Δ, q / r) // V(Δ, p ∨ q / r) to be valid then V must be treated as a constant rather than a variable element in the resulting form. The logical is here meta-logical.

We should also note that there are different ways in which we might use the sequent calculus as a model for the rules by which the logical operations are to be defined. Under a cut-free formulation of the sequent calculus, the only first-order rule to which we would need to appeal in defining the logical operations (or the concept of validity) would be p // p and, instead of appealing to the first-order rule p // p ∨ q, we could appeal to the second-order rule V(Δ, p / q) // V(Δ, p / q ∨ r). Under what I regard as a more natural formulation, we would retain the first-order rule p // p ∨ q, but we would then need to appeal to Cut, i.e. V(Δ / p), V(Δ, p / q) // V(Δ / q), as a second-order rule (and possibly to the propositional counterpart of other structural rules as well, depending upon how the other second-order rules were formulated).
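
To illustrate the cut-free option (a sketch in the spirit of note 10): with Δ empty, the second-order rule V(Δ, p / q) // V(Δ, p / q ∨ r) has as an instance the passage from V(p0 / p0) to V(p0 / p0 ∨ q0); and so, given the validity of p // p, the validity of the instances of p // p ∨ q is already provided for, with the validity of the scheme itself recoverable in the manner of the meta-logical rule of the next section.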

3 Inferential Potency

But what of the problem of inferential inertness? We get that certain propositions of the form V(Δ // r) are true by definition. But how do we get from the definitions licensing the truth of such propositions to their having inferential import?

There are three aspects to this problem, at least from our own point of view. In the first place, when a derived rule is valid then we would like it to be valid in virtue of the nature of the logical operations involved in the formulation of the rule. To this end, we may let derivations in the sequent calculus be our guide. Consider, by way of example, the derivation of p ∨ q → q ∨ p:

(1) p → q ∨ p   by ∨R1
(2) q → q ∨ p   by ∨R2
(3) p ∨ q → q ∨ p   from (1) and (2) by ∨L.

We wish to transform this derivation into a demonstration that V(p ∨ q // q ∨ p) is true in virtue of the nature of ∨. We may write this as □∨ V(p ∨ q // q ∨ p). But note that we can no longer read □ … as saying that … is true by definition in Litland’s strict (full, immediate and constitutive) sense of the term. Using ■ to indicate definition in the strict sense, we wish □ to be an extension of boldfaced ■ in such a way that definitional truths in the strict sense of the term provide an explanatory basis for definitional truths in the extended or weaker sense of the term. So, even though it is not strictly definitive of ∨ that p ∨ q // q ∨ p is valid, we can see how the validity of p ∨ q // q ∨ p can be explained on the basis of what is strictly definitive of ∨.

Let us assume that we have the following strict definitional truths:10

(∨R1) ■∨ V(p // q ∨ p)
(∨R2) ■∨ V(q // q ∨ p)
(∨L) ■∨ V(V(Δ, p / r), V(Δ, q / r) // V(Δ, p ∨ q / r))

We may then argue as follows (where the justifications are given below):

(1) □∨ V(p // q ∨ p)   from (∨R1) by Extension
(2) □∨ V(q // q ∨ p)   from (∨R2) by Extension
(3) □∨ V(V(Δ, p / r), V(Δ, q / r) // V(Δ, p ∨ q / r))   from (∨L) by Extension
(4) □∨ V(V(p / q ∨ p), V(q / q ∨ p) // V(p ∨ q / q ∨ p))   from (3) by Schematic Instantiation
(5) □∨ V(p ∨ q // q ∨ p)   from (1), (2) and (4) by Second-order Closure

Let us consider the various steps involved in this argument:

  • (i) Extension states that:

    If a proposition is true by definition in the strict sense (as given by ■…) then it is true by definition in the weak sense (as given by □…).

  • (ii) Schematic Instantiation states:

    If a schematic rule is valid by definition (in the weak sense) of some items then so is any purely schematic instance of the rule.

Thus, setting Δ = ∧ (the empty plurality), p = p, q = q and r = q ∨ p, we see that the rule V(p / q ∨ p), V(q / q ∨ p) // V(p ∨ q / q ∨ p) is an instance of V(Δ, p / r), V(Δ, q / r) // V(Δ, p ∨ q / r); and so line (4) follows from line (3) by Schematic Instantiation.

  • (iii) Suppose that R1, R2, … and S are specifications of first-order rules (in which some of the variables may not be bound). Then Second-order Closure states:

    If V(|(V(R1), V(R2), … / V(S))|) and V(|R1|), V(|R2|), … are true by definition then so is V(|S|).

We might here think in terms of there being the following meta-logical rule of inference:

V(|R1|), V(|R2|), …, V(|(V(R1), V(R2), … / V(S))|)
――――――――――――――――――――――――――――――――――
V(|S|)

This enables us to infer the validity of the schematic rule |S| from the validity of the schematic rules |R1|, |R2|, … given the validity of the rule that allows us to infer the validity of the corresponding specific rule S from the validity of the corresponding specific rules R1, R2, … . We may, in other words, treat the free variables R1, R2, … and S in (V(R1), V(R2), … / V(S)) as standing for arbitrary rules rather than specific rules. The rule is therefore analogous to the distribution principle for a modalized universal quantifier, with the schematization of the variables corresponding to their modalized quantification.

In applying this principle to the present case, we may set R1 = (p / q ∨ p), R2 = (q / q ∨ p), and S = (p ∨ q / q ∨ p). Line (1) then gives us □ V(|R1|), line (2) that □ V(|R2|), and line (4) that □ V(|(V(R1), V(R2) / V(S))|), from which it follows by the rule that □ V(|S|).

A second issue concerns the weak definitional nature of items when non-logical notions are in play. Suppose it is strictly definitive of Socrates to be a human being. Then we may wish it to be weakly definitive of Socrates, disjunction and being a beetroot—all taken together—that Socrates is a human being or a beetroot. In order to reach this conclusion, we require two further principles. The first is a principle of First-Order Closure for weak definition:

If the propositions p1, p2, … and V(p1, p2, … / q) are true by definition (of some given items) then so is the proposition q.

The second is a more general principle of Instantiation:

If V(p, q, … // r) is true by definition of some items and pʹ, qʹ, … , rʹ is an instance of p, q, … , r, perhaps involving further constituents, then V(pʹ, qʹ, … // rʹ) is true by definition of those items along with the further constituents.11

Of special interest is the case in which one of the additional constituents is itself a logical operation. In the case of the rule p // p ∨ q, for example, we might wish to instantiate it to p // p ∨ (q ∧ r). But since disjunction ‘knows nothing’ of conjunction, the definitional validity of this rule will require reference to the conjunctive operation, ∧, as well as to the disjunctive operation, ∨.12

We can now carry through the derivation that it is weakly definitive of Socrates, disjunction and beetroothood that Socrates is a human being or a beetroot. For let p1 be the proposition that Socrates is a human being and p2 the proposition that he is a beetroot. Then it is strictly definitive of Socrates that he is a human being (■soc p1) and hence weakly definitive of Socrates that he is a human being (□soc p1). Likewise, it is strictly definitive of disjunction that disjunction introduction is valid (■∨ V(p // p ∨ q)) and hence weakly definitive of disjunction that disjunction introduction is valid (□∨ V(p // p ∨ q)). By General Instantiation, it is therefore weakly definitive of Socrates, disjunction, humanity and beetroothood that the specific rule (p1 / p1 ∨ p2) is valid (□soc,∨,H,B V(p1 / p1 ∨ p2)). By an obvious augmentation principle for weak definition, it is also definitive of Socrates, disjunction, humanity, and beetroothood that Socrates is a human being (□soc,∨,H,B p1). By First-order Closure, it is definitive of Socrates, disjunction, humanity, and beetroothood that Socrates is a human being or a beetroot (□soc,∨,H,B p1 ∨ p2); and so, by the principle of Chaining (Fine 1995), it is definitive of Socrates, disjunction, and beetroothood that Socrates is a human being or a beetroot (□soc,∨,B p1 ∨ p2).

The present notion of weak definition is consequentialist; it tells us that if certain propositions are true by definition then so are certain consequences of those propositions. But it is significantly different from the consequentialist notion of essence in Fine (1995). The latter notion of essence is closed under logical consequence. Given that p0 belongs to the essence of some items, it will be automatic that p0 ∨ p0 belongs to their essence, since p0 ∨ p0 is a restricted logical consequence of p0 (one containing no extraneous material). Our notion of definition, by contrast, is closed under structural consequence.13 Given that p0 belongs to the weak definition of some items, it is not automatic that p0 ∨ p0 belongs to their weak definition. This will in general require that the validity of the rule p0 / p0 ∨ p0 belongs to the weak definition of the items and this, in its turn, will in general require that disjunction be among the items to be defined.

A final issue is to justify the inferential role of the logical operations on the basis of their definitions. We wish, for example, to be able to justify the inference from the specific proposition p0 to the specific proposition p0 ∨ q0. For us, the justification is via the notion of validity. It is part of the strict definition of ∨ that the schematic rule p // p ∨ q is valid. But given that it is valid (or, alternatively, given the induced validity of p0 / p0 ∨ q0), we are then justified in using the rule to infer p0 ∨ q0 from p0.

Litland operates with a different conception of how the inferential role of the logical operations is to be justified. It is not that the definition of the logical operations justifies the validity of a given rule and the validity of the rule then justifies its use in inference. Rather, the definition directly justifies the use of the rule in inference. So, for example, given that the rule p ⇒ p ∨ q is part of the definition of ∨ (for any p and q), we are then directly justified in using that rule to infer p0 ∨ q0 from p0 for specific propositions p0 and q0.

I am left, of course, with the problem of how the mere claim that a rule of inference is valid can justify the adoption of the rule in inference. But does not Litland face a similar problem? For how can a mere definition, even when it is given by means of a rule, be any more capable of justifying the adoption of the rule? My account is less direct than his but each, it seems to me, is haunted by the ‘adoption problem’.

Still, the two accounts lead in very different directions. I am interested in developing a theory of definition. In particular, given the strict definition of the logical operations in terms of what is valid, I wish to know what else might thereby be true. Litland, on the other hand, is interested in developing an inferential calculus. Given the strict definition of the logical operations, he wishes to know what might thereby be inferred. Thus, whereas for me Factivity takes the form of a conditional proposition to the effect that what is true by definition is true, for him it takes the form of an inference from something being true by definition to its being true. The two approaches are, of course, related. But it seems to me that they represent two fundamentally different perspectives on what is involved in providing an inferential definition of the logical operations and perhaps, relatedly, on whether one should adopt a propositional or dispositional account of our understanding of the logical constants.

Notes

[1] I should like to thank Jon Litland and the two referees of the paper for many helpful comments.

[2] Some doubts on this score are aired in Fine (1995: 58).

[3] We may extend the index notation, when embedded uses of the essentiality operator are not in play, by underscoring the occurrences of s in their role as object, with ‘s, x̄ ∥ φ(x, s)’ thereby becoming ‘□s φ(s, s̲)’.

[4] Litland has pointed out to me in correspondence that we could replace propositions by zero-premise rules and thereby achieve uniformity in the body of a definition. But is there not a clear distinction between a proposition (of which only truth is demanded) and a zero-premise rule (of which validity must also be demanded)? Also, what of other definitions, in which only propositions are normally taken to be involved? Must they now be taken to concern zero-premise rules?

[5] We should, of course, allow pp to be a null plurality, ∧, and, in a more general treatment, we might wish to talk of sequences rather than of pluralities of propositions.

[6] I am aware that, in Fine (2015; 2016), I proposed attaching a form of variable-binding to the essentialist or ground-theoretic operators. But the application is to something that is itself generic in character, as with what it is for two arbitrary sets to be the same, and not to the identity of particular items.

[7] Glazier (2016: §5) employs a similar form of non-quantificational binding in stating metaphysical laws.

[8] Litland remarks that ‘since we need the definitional operator to bind and generalize variables anyway, we opt to let it do all the variable binding’; and he has explained to me in correspondence that the need arises from wanting to say that when r is the disjunction of p and q then it is by definition the disjunction of p and q. But I would be inclined to distinguish between ‘p ∨ q’ and ‘the disjunction of p and q’ and to restrict the former to a purely inferential definition.

[9] Similarly, I would wish to argue (Fine 2005: 7, 246) that a sentence is a logical truth not because its truth is preserved under arbitrary substitutions for its non-logical constants but because it is an instance of a scheme that is a logical truth.

[10] We might also wish to derive □ V(p // p ∨ q) from □ V(p // p), which, in its turn, could be derived from □V V(p // p) through an application of Chaining, or ‘Mediate Essence’.

[11] Much as in the rule RC of Ditter (2022).

[12] I originally provided an account of weak definition in which extraneous material is allowed, though this meant that we could no longer allow Chaining and no longer define a dependent item to be a distinct item that occurs in a weak definition of the items to be defined. Thanks to one of the referees for suggesting how we might adopt the present more complex, yet more satisfactory, account of weak definition.

[13] It is in this respect akin to the notion of relative logical consequence in Correia (2012: 648).

Competing Interests

The author has no competing interests to declare.

DOI: https://doi.org/10.5334/met.211 | Journal eISSN: 2515-8279
Submitted on: Apr 18, 2025
Accepted on: Jul 9, 2025
Published on: Sep 29, 2025
Published by: Ubiquity Press

© 2025 Kit Fine, published by Ubiquity Press
This work is licensed under the Creative Commons Attribution 4.0 License.