Comparative structural dynamic analysis of GTPases
GTPases regulate a multitude of essential cellular processes ranging from movement and division to differentiation and neuronal activity. These ubiquitous enzymes operate by hydrolyzing GTP to GDP with associated conformational changes that modulate affinity for family-specific binding partners. There are three major GTPase superfamilies: Ras-like GTPases, heterotrimeric G proteins and protein-synthesizing GTPases. Although they contain similar nucleotide-binding sites, the detailed mechanisms by which these structurally and functionally diverse superfamilies operate remain unclear. Here we compare and contrast the structural dynamic mechanisms of each superfamily using extensive molecular dynamics (MD) simulations and subsequent network analysis approaches. In particular, dissection of the cross-correlations of atomic displacements in both the GTP and GDP-bound states of Ras, transducin and elongation factor EF-Tu reveals analogous dynamic features. This includes similar dynamic communities and subdomain structures (termed lobes). For all three proteins the GTP-bound state has stronger couplings between equivalent lobes. Network analysis further identifies common and family-specific residues mediating the state-specific coupling of distal functional sites. Mutational simulations demonstrate how disrupting these couplings leads to distal dynamic effects at the nucleotide-binding site of each family. Collectively our studies extend current understanding of GTPase allosteric mechanisms and highlight previously unappreciated similarities across functionally diverse families.
Introduction
Previous analyses partitioned the Ras catalytic domain into an N-terminal catalytic lobe (lobe 1) and a C-terminal membrane anchoring lobe (lobe 2) [13,14]. Several allosteric sites were identified in lobe 2 or between lobes, including L3 (the loop between β2 and β3), L7 (the loop between α3 and β5), and α5. Importantly, α5 is the major membrane-binding site and has been related to nucleotide-modulated Ras/membrane association [15]. In addition, binding of small molecules at L7 has been reported to affect the ordering of SI and SII [16]. Intriguingly, recent studies of Gα have revealed nucleotide-associated conformational changes and bilobal substructures in the catalytic domain largely resembling those in Ras [17,18]. The allosteric role of lobe 2, which contains the major binding interface to receptors, has also been well established for Gα [18-27]. Furthermore, comparison between G proteins and translational factors via sequence and structural analysis indicates a conserved molecular mechanism of GTP hydrolysis and nucleotide exchange, and cognate mutations of key residues in the nucleotide-binding regions showed similar functional effects among these systems [2,6,7,12]. Collectively, these consistent findings from separate studies support the common allosteric mechanism hypothesis of GTPases and underscore the currently missing detailed residue-wise comparison of the structural dynamics among different GTPase superfamilies.
In this study, we compare and contrast the nucleotide-associated conformational dynamics of H-Ras (H isoform of Ras), Gαt (transducin α subunit) and EF-Tu (elongation factor thermo unstable), and describe how these dynamics can be altered by single point mutations in both common and family-specific ways. This entails the application of an updated PCA of crystallographic structures, multiple long (80-ns) MD simulations, and a recently developed network analysis approach based on residue cross-correlations [18]. In particular, we identify highly conserved nucleotide-dependent correlation patterns across GTPase families: the active GTP-bound state displays stronger correlations both within lobe 1 and between lobes, exhibiting an overall "dynamical tightening" consistent with the previous study of Gα alone [18]. Detailed inspection of the residue-level correlation networks, along with mutational MD simulations, reveals several common key residues that are potentially important for mediating inter-lobe communication. Point mutations of these residues substantially disrupt the couplings around the nucleotide-binding regions in Ras, Gαt and EF-Tu. In addition, with the same network comparison analysis, we identify Gαt- and EF-Tu-specific key residues. Mutations of these residues significantly disrupt the couplings in Gαt and EF-Tu but have little or no effect in Ras. Our results are largely consistent with findings from experimental mutagenesis, with a number of coupling-disrupting mutants having been shown to have altered activities in either Ras or Gα. Our new predictions are promising targets for future experimental testing.
Principal component analysis (PCA) of Ras, Gαt/i and EF-Tu crystallographic structures reveals functionally distinct conformations
Previous PCA of 41 Ras crystallographic structures revealed distinct GDP, GTP and intermediate mutant conformations [13]. Updating this analysis to include the 121 currently available crystallographic structures (S1 Table) reveals consistent results, but with two additional conformations now evident (Fig 2A). In addition to GDP (green in Fig 2A), GTP (red), and mutant forms, GEF-bound nucleotide-free (purple) and so-called 'state 1' forms (orange) are now also apparent. In the GEF-bound form, the SI region is displaced in a distinct manner, 12Å away from the nucleotide-binding site, coincident with the insertion of a helix of GEF into the PL-SI cleft. The state 1 GTP-bound form was first observed via NMR, and later high-resolution crystal structures were solved [28-30]. In contrast to the canonical GTP-bound conformation (red), the state 1 form (orange) lacks interactions between the two switches and the γ-phosphate of GTP, resulting in a moderate 7Å displacement of SI away from its more closed GTP conformation.
The first two PCs capture more than 75% of the total mean-square displacement of all 121 Ras structures. Residue contributions from SI and SII dominate PC1 and PC2 (Fig 2D). The height of each bar in Fig 2D displays the relative contribution of each residue to a given PC. PC1 mainly describes the opening and closing of SI: more open in GEF-bound and state 1 forms, and more closed in nucleotide-bound structures. PC1 also captures a smaller-scale displacement of L8 (the loop between β5 and α4), which resides 5Å closer to the nucleotide-binding pocket in the GEF-bound structures than in the GTP-bound structure set. PC2 depicts SII displacements and clearly separates GTP from GDP-bound forms (red and green, respectively). As expected, the lack of the γ-phosphate in GDP releases SII from the nucleotide, whereas in the GTP form SII is fixed by the hydrogen bond between the backbone amide of G60 and a γ-phosphate oxygen atom. This is also evident in the state 1 form, where this hydrogen bond is disrupted and SII is moderately displaced from the nucleotide (4Å on average from the canonical GTP group structures).
PCA of 53 available Gαt/i structures described recently (S2 Table) revealed three major conformational groups: GTP (red in Fig 2B), GDP (green) and GDI (GDP dissociation inhibitor; blue) bound forms [18]. The first two PCs capture over 65% of the total variance of Cα atom positions in all structures. The dominant motions along PC1 and PC2 are the concerted displacements of SI, SII and SIII in the nucleotide-binding region, as well as a relatively small-scale rotation of the helical domain with respect to RasD (Fig 2E).
PC1 separates GDI-bound from non-GDI-bound forms. In GDI-bound structures the GDI interacts with both the HD and the cleft between SII and SIII of the Ras-like domain, increasing the distance between SII and SIII. Similar to Ras, PC2 of Gαt/i clearly distinguishes the GTP and GDP-bound forms, where again the unique γ-phosphate (or equivalent atom in GTP analogs) coordinates SI and SII. In addition, SIII is displaced closer to the nucleotide, effectively closing the nucleotide-binding pocket.
PCA of 23 available full-length EF-Tu structures reveals distinct GTP and GDP conformations (S3 Table). PC1 alone captures nearly 95% of the total structural variance of Cα atom positions (Fig 2C). It mainly describes the dramatic conformational transition in SI as well as the large rotation of the two β-barrel domains D2 and D3 (Fig 2F). In the GTP-bound form, the C-terminal SI is coordinated to the γ-phosphate and Mg2+ ion, forming a small helix near SII. Meanwhile, D2 and D3 are close to RasD and create a narrow cleft with SI, serving as the binding site for tRNA [31]. In the GDP-bound form, the C-terminal helix in SI unwinds and forms a β-hairpin, protruding towards D2 and D3 [32]. The highly conserved residue T62 (T35 in Ras) of EF-Tu moves more than 10Å away from its position in the GTP form and loses interaction with the Mg2+ ion. In addition, D3 rotates towards SI and D2 moves far away from the Ras-like domain. In contrast to PC1, PC2 captures only a very small portion (3.59%) of the structural variance in EF-Tu (Fig 2F). The major conformational change along PC2 is a small-scale rotation of D2 and D3 with respect to RasD in the GTP form.
PCA of Ras, Gαt/i and EF-Tu demonstrates that the binding of different nucleotides and protein partners can lead to a rearrangement of global conformations in a consistent manner. In particular, within RasD, these three families display conserved nucleotide-dependent conformational distributions with major contributions from the switch regions. In the GTP-bound form of these proteins, SI and SII are associated with the nucleotide through interactions with the γ-phosphate. Despite these similarities, critical questions about their functional dynamics remain unanswered: How does nucleotide turnover lead to allosteric regulation of distinct partner protein-binding events? To what extent are the structural dynamics of these proteins similar beyond the switch region displacements evident in accumulated crystal structures? How do distal disease-associated mutations affect the functional dynamics of each family, and are there commonalities across families? In the next sections, we report MD simulations that address these questions, which are not answered by the accumulated static experimental structures.
MD simulations reveal distinct nucleotide-associated flexibility and cross-correlation near functional regions
MD simulations reveal distinct nucleotide-associated flexibility at known functional regions. Representatives of the distinct GTP and GDP-bound conformations of Ras, Gαt and EF-Tu were selected as starting points for MD simulation. Five replicated 80-ns MD simulations of these three proteins in each state (GTP and GDP; totaling 2.4μs; see Materials and Methods) exhibit high flexibility in the SI, SII, SIII/α3 and loop L3, L7, L8 and L9 regions (Fig 3A-3C). The Cα atom root-mean-square fluctuation (RMSF) in Gαt shows that SI is significantly more flexible in the GDP-bound state (Fig 3B). The C-terminal SI of Ras and EF-Tu, corresponding to the shorter SI in Gαt, is also more flexible with GDP bound (Fig 3A & 3C). Interestingly, the middle part of SI in Ras and EF-Tu shows higher fluctuations in the GTP-bound state. Moreover, SII is more flexible in the GTP-bound state in Ras. Detailed inspection reveals that SII always stays away from the nucleotide during the GDP-bound state MD simulations, whereas SII sometimes moves close to and interacts with the unique γ-phosphate of GTP, leading to higher flexibility in the GTP-bound state. In contrast, the flexibility of SII in Gαt shows no significant difference between states, whereas SII in EF-Tu is less flexible with GTP bound. This is due to the relatively compact interactions between SII and the unique D2 and D3 in GTP-bound EF-Tu. In fact, D2 and D3 show markedly higher flexibility in the GDP state (Fig 3C). Overall, the nucleotide-dependent flexibility of RasD in Ras, Gαt and EF-Tu is quite similar except for SII.
The cross-correlations of atomic displacements derived from MD simulations also manifest conserved nucleotide-associated coupling in these three systems (Fig 3D-3F). In both Ras and Gαt, significantly stronger couplings within the catalytic lobe 1, between PL, SI and SII, are found only in the GTP-bound state (red rectangles in Fig 3D & 3E). Interestingly, a unique inter-lobe coupling between SII and SIII/α3 also characterizes the GTP-bound state in both systems (blue rectangles in Fig 3D & 3E). In EF-Tu, the intra-lobe 1 and inter-lobe couplings are similar between states (red and blue rectangles in Fig 3F). Intriguingly, many negative correlations between D2 and RasD of EF-Tu are found in the GDP-bound state, indicating a swinging motion of D2 with respect to RasD during the MD simulations (lower triangle in Fig 3F).
Correlation network analysis displays similar nucleotide-associated correlations in Ras, Gαt and EF-Tu
Consensus correlation networks for each nucleotide state were constructed from the corresponding replicate MD simulations. In these initial networks, each node is a residue, linked by edges whose weights represent the respective correlation values averaged across simulations (see Materials and Methods). These residue-level correlation networks underwent hierarchical clustering to identify groups of residues (termed communities) that are highly coupled to each other but loosely coupled to other residue groups. Nine communities were identified for Ras and eleven for Gαt and EF-Tu (Fig 4). The two additional family-specific communities not present in Ras correspond to two regions of HD in Gαt and to D2 and D3 in EF-Tu.
In the resulting community networks the width of an edge connecting two communities is the sum of all the underlying residue correlation values between them. Interestingly, Ras, Gαt and EF-Tu community networks can be partitioned into two major groups (dashed lines in Fig 4) corresponding to the previously identified lobes for Ras and the RasD in Gαt [13,18]. The boundary between lobes is located at the loop between α2 and β4. In these proteins, lobe1 includes the nucleotide-binding communities (PL, SI and SII) as well as the N-terminal β1-β3 and α1 structural elements. Lobe2 includes α3-α5, L8 and the C-terminal β4-β6 strands.
Comparing the GTP and GDP community networks of these three proteins reveals common nucleotide-dependent coupling features. In particular, for Ras and Gαt, comparing the relative strength of inter-community couplings in the GTP and GDP networks using a nonparametric Wilcoxon test across simulation replicates reveals coupling patterns that differ significantly between states in a common way across families (colored edges in Fig 4A & 4B). Within lobe 1, stronger couplings between PL, SI and SII are observed for the GTP state of both families. This indicates that the γ-phosphate of GTP leads to enhanced coupling of these proximal regions. This is consistent with our PCA results above, where PC2 clearly depicts the more closed conformation of SI and SII in the GTP-bound structures (Fig 2D & 2E). In addition, a significantly stronger inter-lobe correlation between SII and α3 is evident for the GTP state of both families, which is not available from analysis of the static experimental ensemble alone. This indicates that nucleotide turnover can lead to distinct structural dynamics not only at the immediate nucleotide-binding site in lobe 1 but also at the distal lobe 2 region.
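To illustrate the statistical comparison used here, the short Python sketch below applies a nonparametric rank-sum (Wilcoxon) variant of the test to per-replicate inter-community coupling strengths; the numbers are placeholders, not values from the simulations.

```python
import numpy as np
from scipy import stats

# Hypothetical SII-alpha3 coupling strengths (sums of residue cross-correlations
# between the two communities), one value per MD replicate.
gtp_coupling = np.array([4.1, 3.8, 4.5, 4.0, 4.3])  # five GTP-state replicates
gdp_coupling = np.array([2.9, 3.1, 2.7, 3.3, 3.0])  # five GDP-state replicates

# Two-sided Wilcoxon rank-sum (Mann-Whitney U) test for independent samples.
stat, pval = stats.mannwhitneyu(gtp_coupling, gdp_coupling, alternative="two-sided")
print(f"U = {stat:.1f}, p = {pval:.3f}")  # a small p suggests state-dependent coupling
```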
Intriguingly, similar patterns of intra and inter-lobe dynamic correlations are observed in EF-Tu (Fig 4C). Within lobe1, significantly stronger correlations between PL-SI and PL-SII are evident in the GTP state, although SI-SII coupling becomes weaker in this state. In fact, the C-terminal β-hairpin of SI moves towards and interacts extensively with SII and D3 in the GDP bound state, leaving the nucleotide-binding site widely open. Moreover, our results reveal that SII and SIII/α3 of EF-Tu are more tightly coupled in the GTP state, resembling the strong inter-lobe couplings in the GTP bound Ras and Gαt. It is worth noting that this conserved structural dynamic coupling is evident only from the comparative network analysis and is not accessible from PCA of crystal structures.
The common residue-wise determinants of structural dynamics in Ras, Gαt and EF-Tu
Comparative network analysis highlights the common residue-wise determinants of nucleotide-dependent structural dynamics. Besides correlations within lobe 1, inter-lobe couplings are also significantly stronger in the GTP-state networks of Ras, Gαt and EF-Tu. Inspection of the residue-wise correlations between communities reveals common major contributors to the SII-α3 couplings in the three proteins (red residues in S4 Table). In particular, M72 Ras in SII and V103 Ras in α3 act as primary contributors to inter-lobe correlations in Ras. Interestingly, the equivalent residues in the other two systems, F211 Gαt or I93 EF-Tu in SII and F255 Gαt or V126 EF-Tu in α3/SIII, also contribute to the inter-lobe couplings. We further examined the importance of these residues by MD simulations of mutant GTP-bound systems. Results indicate that each single mutation, M72A Ras and V103A Ras, can significantly reduce the couplings between SI and PL, indicating that these mutations disturb couplings at distal sites of known functional relevance (Fig 5A & 5D). Moreover, the cognate mutations F211A Gαt and F255A Gαt in Gαt not only decouple SI and PL but also SI and SII (Fig 5B & 5E). Similarly, the analogous mutation I93A EF-Tu decreases the correlations between PL and SI, whereas V126A EF-Tu decouples PL and SII (Fig 5C & 5F). These simulation results indicate that single alanine mutations of residues contributing to SII-α3 couplings diminish the couplings of the nucleotide-binding regions, and that this allosteric effect is common to all three proteins.
Inter-lobe couplings that are distal from the nucleotide-binding regions are also shown to be critical for the nucleotide-dependent dynamics in Ras, Gαt and EF-Tu. By inspecting the residue-level couplings between L3 and α5, we identified common distal inter-lobe couplings in the three proteins. Mutational simulations indicate that the substitutions K188A Gαt and D337A Gαt significantly decouple SI from the PL and SII regions (Fig 6B & 6E). Interestingly, the mutations K188A Gαt and D337A Gαt have been reported to cause a 6-fold and 2-fold increase in nucleotide exchange, respectively, but no direct structural dynamic mechanism was established [19]. We further tested mutations of analogous residues in Ras. We considered both D47 Ras and E49 Ras as the equivalent residues to K188 Gαt (due to the longer L3 region of Ras), and R164 Ras as the equivalent residue to D337 Gαt. Both the double mutation D47A/E49A Ras and the single mutation R164A Ras significantly reduce the correlations between PL and SI (Fig 6A & 6D). We note that the functional consequences of mutating these residues in Ras have been highlighted in a previous study, in which the salt bridges between D47/E49 Ras in L3 and R161/R164 Ras in α5 were shown to be involved in the reorientation of Ras with respect to the plasma membrane and enhanced activation of the MAPK pathway [15]. Moreover, substitutions of the analogous residues R75A EF-Tu (L3) and D207A EF-Tu (α5) also significantly reduce the couplings between PL and SI (Fig 6C & 6F). Our results indicate that the conserved interactions between L3 and α5 are important for maintaining the close coordination of the distal SI, SII and PL around the nucleotide, and that this is common to all three proteins.
Network analysis identifies family-specific residue substitutions that can also perturb structural dynamics
Comparison of the GTP-bound residue-wise networks of Ras, Gαt and EF-Tu reveals that the N-terminus of α3 strongly couples to SII only in Gαt and EF-Tu. In particular, we identified residues R201 Gαt or A86 EF-Tu (SII) and E241 Gαt or Q115 EF-Tu (α3) as underlying these strong couplings (blue residues in S4 Table). These residues are specific to Gαt and EF-Tu because the corresponding residues E62 Ras in SII and K88 Ras in α3 make no contribution in Ras (green residues in S4 Table). Mutational MD simulations indicate that the substitutions E241A Gαt and Q115A EF-Tu have a similarly drastic effect on the coupling of the nucleotide-binding regions (S1 Fig). In particular, the couplings between PL, SI and SII are all significantly reduced (S1B & S1C Fig). We note that the mutation corresponding to E241A Gαt in Gαs (the α subunit of the stimulatory G protein for adenylyl cyclase) was previously reported to impair GTP binding, but the structural basis for this allosteric effect has been unknown [33,34]. Our results indicate that the weakened correlations of the nucleotide-binding regions in E241A Gαt, as a consequence of allosteric mutations in SIII/α3 and SII, likely underlie the reported impaired GTP binding. Moreover, we identified residue E232 Gαt as a Gαt-specific primary contributor to the inter-lobe couplings in SIII, a residue that has no counterpart in the other two proteins (S4 Table). The simulation of the mutation E232A Gαt likewise shows diminished couplings between PL, SI and SII (S2A Fig). Similar effects of the mutations R201A Gαt and D234A Gαt are also observed (S2B & S2C Fig).
Mutations of the counterpart residues E62A Ras and K88A Ras result in no significant change in the coupling of the nucleotide-binding loops in Ras (S1A Fig). Collectively, these findings indicate that in Gαt and EF-Tu both N- and C-terminal α3 positions dynamically couple with SII, whereas in Ras the communication between α3 and SII is mediated mainly through the C-terminus of α3. In addition, our results suggest that SIII plays a unique role in Gαt, not only mediating the couplings between the two lobes but also allosterically maintaining the tight correlations between SI, SII and PL.
Discussion
In this work, our updated PCA of Ras structures captures two new conformational clusters, representing the GEF-bound state and "state 1", respectively, in addition to the canonical GTP and GDP forms. By comparing the Ras PCA to PCA of Gαt/i and EF-Tu, we reveal common nucleotide-dependent collective deformations of SI and SII across G protein families. Our extensive MD simulations and network analyses reveal common nucleotide-associated conformational dynamics in Ras, Gαt and EF-Tu. Specifically, these three systems have stronger intra-lobe 1 (PL-SI and PL-SII) and inter-lobe (SII-SIII/α3) couplings in the GTP-bound state. Meanwhile, with the network comparison approach we further identify residue-wise determinants of commonalities and specificities across families. Residues M72 Ras (SII), V103 Ras (α3), D47/E49 Ras (L3) and R164 Ras (α5) are predicted to be crucial for inter-lobe communication in Ras. Mutations of these distal residues display decreased SI-PL coupling strength. Interestingly, the analogous residues in the other two proteins, F211 Gαt /I93 EF-Tu (SII), F255 Gαt /V126 EF-Tu (α3), K188 Gαt /R75 EF-Tu (L3) and D337 Gαt /D207 EF-Tu (α5), also mediate important inter-lobe couplings and show similar decoupling effects upon alanine mutation. Besides the key residues that are common to the three systems, residues mediating inter-lobe couplings only in Gαt and EF-Tu are identified. These include R201 Gαt /A86 EF-Tu and E241 Gαt /Q115 EF-Tu, whose cognates in Ras do not have a significant effect on the nucleotide-binding regions upon mutation. In addition, the Gαt-specific residue E232 Gαt in SIII (which is missing in Ras and EF-Tu) is identified as important to the couplings of the nucleotide-binding regions. Importantly, some of our highlighted mutants (D47A/E49A Ras, K188A Gαt, D337A Gαt and E241A Gαt) have been reported to have functional effects in in vitro experiments. Our analysis provides insights into the atomistic mechanisms of these altered protein functions.
Using differential contact map analysis of crystallographic structures, Babu and colleagues recently suggested a universal activation mechanism of Gα [27]. In their model, structural contacts between α1 and α5 act as a 'hub' mediating the communication between α5 and the nucleotide. These contacts are broken upon the binding of receptor at α5, leading to a more flexible α1 and destabilization of nucleotide binding. According to their studies, however, these critical α1/α5 contacts do not exist in Ras structures. Thus, they concluded that, unlike in Gα, α5 in Ras does not allosterically regulate the nucleotide. It is worth noting that this work is based purely on the comparison of static structures, without considering protein dynamics. In fact, our study indicates that functionally important communications may not be directly observable from static structures. For example, the inter-lobe couplings between SII and SIII/α3 are not captured by PCA of the structure ensemble, but they are clearly shown in our network analysis of structural dynamics. By inspecting structural dynamics, we find that α5 in Ras actually plays an allosteric role, in which a point mutation (R164A) substantially disrupts the couplings in the nucleotide-binding regions. The potential salt bridges between D47/E49 in L3 and R161/R164 in α5 are shown in S3 Fig. A previous study of Ras GTPases via elastic network model normal mode analysis (ENM-NMA) revealed similar bilobal substructures and found that functionally conserved modes are localized in the catalytic lobe 1, whereas family-specific deformations are mainly found in the allosteric lobe 2 [35]. A subsequent study via MD, in contrast, indicated that the conformational dynamics of Ras and Gαt are distinct, especially in the GDP state [36]. We note that in that study only a single MD simulation trajectory was analyzed, which is insufficient to assess the significance of the observed differences. Moreover, few atomistic details were given in that work. In our study, we make improvements by building ensemble-averaged networks based on multiple MD simulations instead of a single trajectory. This increases the robustness of the networks and largely reduces statistical errors. In addition, our correlation analysis provides residue-wise predictions of potentially important positions that mediate communication between functional regions. Overall, the separation of functionally conserved and family-specific residues in conformational dynamics provides unprecedented insight into protein evolution and engineering.
Crystallographic structure preparation
Atomic coordinates for all available Ras, Gαt/i and EF-Tu crystal structures were obtained from the RCSB Protein Data Bank [37] via the sequence search utilities in the Bio3D package version 2.2 [38,39]. Structures with missing residues in the switch regions were not considered in this study, resulting in a total of 143 chains extracted from 121 unique structures for Ras, 53 chains from 36 unique structures for Gαt/i and 34 chains from 23 unique structures for EF-Tu (detailed in S1-S3 Tables). Prior to analyzing the variability of the conformational ensemble, all structures were superposed iteratively to identify the most structurally invariant region. This procedure excludes residues with the largest positional differences (measured as an ellipsoid of variance determined from the Cartesian coordinates of equivalent Cα atoms) before each round of superposition, until only invariant "core" residues remain [40]. The identified "core" residues were used as the reference frame for the superposition of both crystal structures and subsequent MD trajectories.
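The iterative "core" search described above can be sketched compactly. The following Python code is an illustrative re-implementation, not the Bio3D core.find() routine actually used: it substitutes a simple per-residue total-variance criterion and a fixed target core size for the ellipsoid-of-variance measure, but follows the same fit-measure-exclude loop.

```python
import numpy as np

def kabsch(mobile, ref):
    """Return rotation R and translation t that best superpose `mobile`
    onto `ref` (both of shape (n, 3)) in the least-squares sense."""
    mc, rc = mobile.mean(axis=0), ref.mean(axis=0)
    H = (mobile - mc).T @ (ref - rc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # avoid improper rotations
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = rc - mc @ R.T
    return R, t

def find_core(coords, n_core=30, drop=2):
    """coords: (n_structs, n_res, 3) equivalent C-alpha coordinates.
    Iteratively superpose on the current core and discard the `drop`
    most variable residues until only `n_core` remain."""
    core = np.arange(coords.shape[1])
    while core.size > n_core:
        fitted = []
        for xyz in coords:
            R, t = kabsch(xyz[core], coords[0][core])   # fit on core residues only
            fitted.append(xyz @ R.T + t)                # apply fit to all residues
        var = np.var(np.array(fitted), axis=0).sum(axis=1)  # per-residue variance
        core = core[np.argsort(var[core])[:-drop]]      # exclude the worst residues
    return core
```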
Principal component analysis
PCA was employed to characterize inter-conformer relationships of both Ras and Gαt/i. PCA is based on the diagonalization of the variance-covariance matrix, S, with elements S ij built from the Cartesian coordinates of Cα atoms, r, of the superposed structures:

S ij = <(r i − <r i >)(r j − <r j >)>,

where i and j enumerate all 3N Cartesian coordinates (N is the number of atoms being considered), and <·> denotes the average value. The eigenvectors, or principal components, of S correspond to a linear basis set of the distribution of structures, whereas each eigenvalue describes the variance of the distribution along the corresponding eigenvector. Projection of the conformational ensemble onto the subspace defined by the two largest PCs provides a low-dimensional display of structures, highlighting the major differences between conformers.
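In code, the PCA described above amounts to a few numpy operations on the matrix of superposed Cα coordinates. The sketch below is illustrative (the published analysis used Bio3D):

```python
import numpy as np

def pca(xyz):
    """xyz: (n_structures, 3N) matrix of superposed C-alpha coordinates,
    e.g. flattened from (n_structures, N, 3) after core-based superposition."""
    mean = xyz.mean(axis=0)
    dev = xyz - mean                          # deviations from the mean structure
    S = dev.T @ dev / (xyz.shape[0] - 1)      # 3N x 3N variance-covariance matrix
    evals, evecs = np.linalg.eigh(S)          # eigh: S is symmetric
    order = np.argsort(evals)[::-1]           # sort by decreasing variance
    evals, evecs = evals[order], evecs[:, order]
    scores = dev @ evecs                      # projection onto the PCs
    frac = evals / evals.sum()                # fraction of total variance per PC
    return evals, evecs, scores, frac
```

Plotting scores[:, 0] against scores[:, 1] gives a conformer map of the kind shown in Fig 2A-2C, and summing squared eigenvector components per residue gives residue contributions of the kind shown in Fig 2D-2F.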
Molecular dynamics simulations
MD simulation protocols similar to those used in [18] were employed. Briefly, AMBER12 [41] and the corresponding force field ff99SB [42] were used in all simulations. Additional parameters for guanine nucleotides were taken from Meagher et al. [43]. The Mg2+·GDP-bound Ras crystal structure (PDB ID: 4Q21), Gαt structure (PDB ID: 1TAG) and EF-Tu structure (PDB ID: 1TUI) were used as the starting points for GDP-bound simulations. The Mg2+·GNP (PDB ID: 5P21), Mg2+·GSP (PDB ID: 1TND) and Mg2+·GNP (PDB ID: 1TTT) bound structures were used as the starting points for GTP-bound simulations of Ras, Gαt and EF-Tu, respectively. These structures were identified as cluster representatives from PCA of the crystallographic structures. Prior to MD simulations, the sulfur (S1γ)/nitrogen (N3β) atom in the GTP analogue was replaced with the corresponding oxygen (O1γ)/oxygen (O3β) of GTP. All Asp and Glu residues were deprotonated, whereas Arg and Lys were protonated. The protonation state of each His was determined by its local environment via the PROPKA method [44]. Each protein system was solvated in a cubic pre-equilibrated TIP3P water box, with a distance of at least 12Å from the surface of the protein to any side of the box. Sodium ions (Na+) were then added to neutralize the system. Each MD simulation started with a four-stage energy minimization, each stage employing 500 steps of steepest descent followed by 1500 steps of conjugate gradient. First, the atomic positions of ligands and protein were fixed and only the solvent was relaxed. Second, ligands and protein side chains were relaxed with the protein backbone fixed. Third, all atoms of ligands and protein were relaxed with the solvent fixed. Fourth, all atoms were free to relax with no constraint. Subsequent to energy minimization, 1ps of MD simulation was performed to increase the temperature of the system from 0K to 300K. Then 1ns of simulation at constant temperature (T = 300K) and pressure (P = 1bar) was performed to equilibrate the system. Finally, 80ns of production MD was performed under the same conditions as the equilibration. For long-range electrostatic interactions, the particle mesh Ewald summation method was used, while for short-range non-bonded van der Waals interactions, an 8Å cutoff was used. In addition, a 2-fs time step was used. The center-of-mass motion was removed every 1000 steps and the non-bonded neighbor list was updated every 25 steps.
We performed a total of 1,920 ns of MD simulation and analyzed results from multiple 80-ns production-phase simulations for each of our three systems, including the wild type in the two nucleotide states along with 5 mutant Ras, 8 mutant Gαt and 5 mutant EF-Tu systems (see the full listing in S5 Table). The RMSD time courses for the above systems are shown in S4 Fig.
Correlation network construction
Consensus correlation networks were built from MD simulations to depict dynamic couplings among functional protein segments. A weighted network graph was constructed in which each node represents an individual residue and the weight of the edge between nodes i and j represents their Pearson's inner product cross-correlation value c ij [45] over the MD trajectories. The approach is similar to the dynamical network analysis method introduced by Luthey-Schulten and colleagues [46]. However, instead of using a 4.5Å contact map of non-neighboring residues to define network edges, which were further weighted by a single correlation matrix, we constructed consensus networks based on five replicate simulations in the same way as described before [18].
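A minimal Python sketch of this construction is given below. The cross-correlation follows the standard normalized inner-product definition; the consensus rule (averaging across replicates and keeping only edges that pass a cutoff in every replicate) is a simplified stand-in for the procedure of ref. [18], and the 0.5 cutoff is an illustrative assumption.

```python
import numpy as np
import networkx as nx

def cross_correlation(traj):
    """traj: (n_frames, n_res, 3) C-alpha coordinates of one superposed
    trajectory. Returns the (n_res, n_res) Pearson inner-product
    cross-correlation matrix c_ij of atomic displacements."""
    dr = traj - traj.mean(axis=0)                         # displacement vectors
    cov = np.einsum("tik,tjk->ij", dr, dr) / dr.shape[0]  # <dr_i . dr_j>
    norm = np.sqrt(np.diag(cov))
    return cov / np.outer(norm, norm)

def consensus_network(trajs, cutoff=0.5):
    """Average |c_ij| across replicate trajectories and keep an edge only
    if its correlation exceeds `cutoff` in every replicate."""
    cs = np.array([np.abs(cross_correlation(t)) for t in trajs])
    keep = (cs > cutoff).all(axis=0)          # edge present in all replicates
    w = cs.mean(axis=0)
    g = nx.Graph()
    n = w.shape[0]
    g.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if keep[i, j]:
                g.add_edge(i, j, weight=w[i, j])
    return g
```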
Network community
Hierarchical clustering was employed to identify residue groups, or communities, that are highly coupled to each other but loosely coupled to other residue groups. We used a betweenness clustering algorithm similar to that introduced by Girvan and Newman [47]. However, instead of partitioning according to the maximum modularity score, as is usual for unweighted networks, we selected the partition closest to the maximum score but with the smallest number of communities (i.e. the earliest high-scoring partition). This approach avoids the common case in which many small communities are generated with equally high partition scores. The resulting networks under different nucleotide-bound states showed largely consistent community partitions in Ras, Gαt and EF-Tu, with differences mainly localized at the nucleotide-binding PL, SI, SII and α1 regions. To facilitate comparison between states and families, the boundaries of these regions were re-defined based on known conserved functional motifs. Re-analysis of the original residue cross-correlation matrices with this definition of communities was then performed. Only inter-community correlations were of interest; these were calculated as the sum of all underlying residue correlation values between two given communities, subject to the condition that the smallest atom-atom distance between the corresponding residue pairs was less than 4.5Å (for Gαt and EF-Tu) or 6Å (for Ras) for more than 75% of the total simulation frames. A larger cutoff was selected for Ras because the overall residue-level correlations are weaker in Ras. A standard nonparametric Wilcoxon test was performed to evaluate the significance of the differences of inter-community correlations between distinct states.

S4 Table. Residue-wise contributions to inter-community couplings. The numbers represent the residue-wise contributions to inter-community couplings. For example, the sum of correlations between residue M72 in SII and all residues in SIII/α3 is 1.19 (after filtering by the contact map). The first row contains common counterpart residues (red) connecting SII and SIII/α3 in the three proteins. The second row contains family-specific functional residues: residues in Gαt and EF-Tu (blue) contribute to the dynamic correlations between SII and SIII/α3, whereas their counterparts in Ras (green) make no contribution. The third row contains the Gαt-specific residue in SIII, which has no counterpart in the other two proteins. (DOCX)
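As an illustration of the betweenness clustering and the "earliest high-scoring partition" heuristic described in this section, the sketch below uses networkx; the tolerance defining how close to the maximum modularity counts as "high" is an illustrative assumption, and edge weights are handled in a simplified way compared to the weighted clustering actually performed.

```python
import networkx as nx
from networkx.algorithms import community

def earliest_high_partition(g, tol=0.05):
    """Run Girvan-Newman edge-betweenness clustering on a network and
    return the partition closest to the maximum modularity but with the
    fewest communities. `tol` sets how near the maximum counts as 'high';
    its value here is an assumption for illustration."""
    parts = [tuple(c) for c in community.girvan_newman(g)]
    scores = [community.modularity(g, p, weight="weight") for p in parts]
    best = max(scores)
    for p, s in zip(parts, scores):      # partitions arrive with increasing
        if s >= best - tol:              # community counts, so the first
            return p                     # qualifying one is the earliest
    return parts[scores.index(best)]
```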
"Biology",
"Chemistry"
] |
Extending the Multiple Discrete Continuous (MDC) modelling framework to consider complementarity, substitution, and an unobserved budget
Introduction
Many choices can be represented as multiple discrete continuous decisions. In these, a decision maker faces a finite set of alternatives, and must choose how much to "consume" of each one, potentially consuming none, one, or multiple alternatives. Examples of these situations include activities performed during a day, grocery shopping, investment allocation, etc. Traditional choice models are not well suited for these situations, as they only allow the choice of a single alternative.
Continuous models, on the other hand, often underestimate the probability of zero consumption for individual alternatives, also known as the "corner solution". Joint models, where the continuous choice is conditional on the discrete one, usually lack a strong grounding in economic theory, though there are exceptions (Hausman et al., 1995).
The Karush-Kuhn-Tucker multiple discrete continuous (MDC) consumer demand models (Bhat, 2008, 2018; Chintagunta, 1993; Hanemann, 1978; Kim et al., 2002; Mehta and Ma, 2012; Phaneuf and Herriges, 1999; Song and Chintagunta, 2007; Wales and Woodland, 1983) attend to the issues mentioned in the previous paragraph. These models begin by explicitly formulating the consumer utility maximisation problem, assuming either a direct or indirect utility function with associated randomness. Then the optimal solution is derived through the use of Karush-Kuhn-Tucker conditions. Finally, the likelihood function of these conditions is written given the distributional assumptions on the utility function. Nowadays, one of the most popular models of this category is the Multiple Discrete Continuous Extreme Value (MDCEV) model (Bhat, 2008). It has been applied in different areas, such as transport (Jäggi et al., 2012), time use (Enam et al., 2018), social interactions (Calastri et al., 2017), alcohol purchase (Lu et al., 2017), energy consumption (Jeong et al., 2011), investment decisions (Lim and Kim, 2015), household expenditure data (Ferdous et al., 2010), price promotions (Richards et al., 2012), and tourism (Pellegrini et al., 2017).
In this paper, we propose two extensions to the MDC modelling framework. First, we propose a new non-additive functional form for the utility that includes explicit complementarity and substitution effects. Secondly, we present an MDC model formulation that does not require the definition of a budget, while still allowing for explicit complementarity and substitution. The second approach is a suitable approximation of a full MDC model for the (relatively common) situation where the expenditure on all alternatives included in the model (i.e. the inside goods) is small compared to the overall budget, which allows us to drop the budget from the model likelihood. To allow for a tractable likelihood function, we do not include a stochastic error term in the marginal utility of the outside good in either of the two proposed models.
Substitution and complementarity define relationships between the demand for pairs of products.
If the demand for one of them increases, then the demand for the other is reduced in the case of substitution and increased in the case of complementarity (Hicks and Allen, 1934). While the budget constraint naturally induces substitution between products due to income effects, this is only an indirect effect. The inclusion of complementarity and substitution is necessary for a more realistic representation of behaviour in applications as diverse as time use or grocery shopping.
For example, in the first case, it could be that going to the cinema makes it more likely for individuals to also eat at a restaurant. In the second case, it could be that products such as pasta and tomato sauce are usually bought together. On the other hand, it could be that the more hours an individual works, the fewer hours they allocate to leisure activities; or purchasing more bread leads to a reduction in the consumption of biscuits.
Concerning the budget, while determining it can be easy in some applications, it can be challenging in others. For example, in purchase decisions, the budget will rarely be an individual's full income, as there is likely mental accounting and recurring expenses to account for, none of which are observable. Investment decisions face a similar problem, as the total budget may expand or shrink as a function of the expected performance of the investment alternatives. There are other scenarios where even the simple definition of a budget is problematic, for example when modelling the number of recreational trips during a year, or the number of activities performed by an individual during a week. The problem becomes more acute in forecasting. Any predictions from a model require a budget, and predicting the budget, e.g. the income of individuals in the future, is another problem in itself, and introduces cascading errors in the forecast values.
While other models including complementarity and substitution effects through non-additively separable utility functions have been proposed in the literature, they either require complementarity and substitution effects to add up to zero (Song and Chintagunta, 2007), or pose specific constraints on their parameters, making either estimation or model transferability difficult (Bhat et al., 2015; Mehta and Ma, 2012; Pellegrini et al., 2021a). Models with an implicit (also called infinite) budget have also been proposed by Bhat (2018) and ? for models with neither complementarity nor substitution effects. A detailed comparison between the models in this paper and those already in the literature is presented in section 5.
The remainder of this document is structured as follows. The next section introduces the formulation, derivation, likelihood function and forecasting algorithm of the model with complementarity and substitution. Section 3 presents the same for the model with complementarity, substitution and an implicit budget. Section 4 discusses the identification of both models' parameters, some constraints that theory and estimation impose on them, and compares the forecasting performance of the two models against each other. Section 5 compares the proposed models' formulations to those of similar models in the literature. Section 6 presents applications of the proposed models to four different datasets, dealing with time use, household expenditure, supermarket scanner data, and number of trips, respectively. The paper closes with a brief summary of the proposed model formulations' capabilities and limitations.
2 An MDC model with complementarity and substitution
Model formulation
Consider the classical (consumer) utility maximisation problem, where an individual n must decide what products k to consume from a set of alternatives, by maximising his or her utility subject to a budget constraint (Eqn. 1).
where n = 1...N indexes individuals and k = 1...K alternatives, x n = [x n0 , x n1 , ..., x nK ] is a vector grouping the consumed amount of each alternative (product), p nk is the price of alternative k faced by individual n, and B n is the total budget available to individual n. x n0 is an outside or numeraire good, i.e. a good that aggregates all consumption outside of the category of interest.
For example, if the researcher is interested in modelling demand for food, x n1 , ..., x nK would represent consumption of different food categories (the inside goods), while x n0 would represent the aggregate consumption of housing, transport, leisure, etc. It is usually assumed that p n0 = 1, so that x n0 becomes the total expenditure on categories other than the one of interest. To simplify the notation, we use this convention henceforth. It is assumed that the numeraire good is always consumed, so x n0 > 0 always.
The formulation in eqn. 1 is consistent with a two-stage budgeting approach, where the individual first allocates expenditure to broad groups (e.g. food, utilities, transport, entertainment, etc.) based on price indices representative of each group, followed by independent within-group allocations to individual products. According to Edgerton (1997), such an approach is sensible and subject to only small approximation errors when (i) the preferences for groups are weakly separable, i.e. the utility provided by each group is not affected by the level of consumption of other groups; and (ii) the group price indices being used do not vary too greatly with the utility or expenditure level. The first condition can be satisfied as long as the inside goods are reasonably separable from the excluded goods. Edgerton (1997) argues that empirical and theoretical arguments support the fulfilment of the second condition.
We assume the following functional forms for the different parts of the utility function.
We take the definition of u k from Bhat (2008). In this formulation, ψ nk represents alternative k's base utility, i.e. its marginal utility at zero consumption. This parameter can be interpreted as the scale of the utility of product k. The γ k parameters, on the other hand, relate mainly to consumption satiation, by altering the curvature of alternative k's utility function. In general, a higher γ k indicates higher consumption of alternative k, when consumed. While a common interpretation is that ψ nk and γ k determine what and how much of alternative k to consume, respectively, this is not completely true. There is a level of interaction between these parameters, and in some circumstances a low value of ψ nk can be compensated by a high value of γ k (Bhat, 2008, 2018).
Parameters ψ nk must always be positive, as they represent the marginal utility of alternatives at the point of zero consumption. We ensure this using the following definition:

ψ n0 = exp(α z n0 ); ψ nk = exp(β k z nk + ε nk ), (5)

where z n0 is a column vector of characteristics of the decision maker that are expected to correlate with that individual's marginal utility of the outside good (e.g. socio-demographics); α is a row vector of parameters representing the weights of those characteristics on the marginal utility of the outside good; z nk are attributes of alternative k; β k are vectors of parameters representing the weights of those attributes on the alternative's base utility; and ε nk is a random disturbance term.
We only include random disturbances in the base utility of the inside goods, as this leads to a computationally tractable likelihood function. We discuss the inclusion of a random disturbance in the marginal utility of the outside good in Section 4.1.
The final component of the utility function, u kl (x nk , x nl ), captures the complementarity and substitution effects between inside goods. This particular functional form is inspired by the translog function, and by previous formulations by Vásquez Lavín and Hanemann (2008) and Bhat et al. (2015). Figure 1 presents the behaviour of this component for a set of δ kl parameters and different values of x nk and x nl , which are assumed to be equal. If δ kl > 0, there is complementarity between alternatives k and l, as this component increases the overall utility. If δ kl < 0, there is a substitution effect between alternatives k and l, as u kl becomes more negative as x nk and x nl increase. If δ kl = 0, the consumption of the two alternatives is independent of each other. The value of u kl is bounded to the interval [0, δ kl ), ensuring the transferability of estimated models to other datasets, a point we discuss in Section 4.2.
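To make the assembled utility concrete, the sketch below evaluates a total utility of this general shape in Python. The inside-good term uses the γ-profile sub-utility of Bhat (2008), u k = γ k ψ k ln(x k /γ k + 1); since the exact interaction transform of eqn. 4 is not reproduced here, the pairwise term uses a hypothetical saturating form δ kl (1 − exp(−x k x l )), chosen only because it shares the stated properties of being bounded by [0, δ kl ) for δ kl > 0 and flattening for large consumptions. The concave log form for the outside good is likewise an assumption for illustration.

```python
import numpy as np

def total_utility(x0, x, psi0, psi, gamma, delta):
    """Illustrative utility with complementarity/substitution.
    x0: outside-good consumption; x: (K,) inside-good consumptions;
    psi0, psi, gamma: base-utility and satiation parameters;
    delta: (K, K) symmetric matrix of interaction parameters."""
    u0 = psi0 * np.log(x0)                               # assumed concave outside good
    uk = np.sum(gamma * psi * np.log(x / gamma + 1.0))   # Bhat (2008) gamma-profile
    inter = delta * (1.0 - np.exp(-np.outer(x, x)))      # hypothetical bounded transform
    ukl = np.triu(inter, k=1).sum()                      # sum over unordered pairs k < l
    return u0 + uk + ukl
```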
In summary, the proposed MDC model has two main characteristics. First, it contains no stochastic error in the marginal utility of the outside good, allowing for a tractable likelihood function. Second, its non-additive utility function allows for interaction (complementarity and substitution) among alternatives.
Model derivation
To solve the optimisation problem, we begin by writing its Lagrangian (Eqn. 6) and the Karush-Kuhn-Tucker conditions of optimality (eqns. 7 and 8). We drop the n subscript to simplify the notation.
Eqn. 8 is an equality when alternative k is consumed (i.e. x * nk > 0, with x * nk the consumption at the optimum, i.e. the observed consumption), and an inequality when x * nk = 0. In other words, the marginal utility of any consumed product k at the optimum level of consumption will be λ scaled by the alternative's price p nk ; if the product is not consumed, its marginal utility will be lower. By combining eqns. 7 and 8, we obtain Eqn. 9. Replacing ψ 0 and ψ k by their definitions (Eqn. 5), and isolating the random component ε k , we obtain Eqn. 10. Now, if we assume all ε k disturbances to follow identical and independent distributions, we only need to apply the Change of Variable Theorem from ε k to x k (only over the consumed alternatives) to obtain the likelihood function of the model (Eqn. 11), where f and F are the density and cumulative distribution functions of ε k , respectively. In this set of equations, |J| is the determinant of the Jacobian J of the vector −W m , where m indexes consumed alternatives. The elements of this Jacobian are defined in Eqn. 12 (i indexes rows, and j columns). No obvious compact form exists for this determinant. I x k >0 and I x k =0 are binary variables taking value 1 if x k > 0 or x k = 0, respectively, and zero otherwise. If no alternative is consumed, the Jacobian drops out of Eqn. 11.
In the remainder of this paper, we assume all ε k disturbances to follow identical and independent Normal distributions with mean fixed to zero and a standard deviation σ, which is estimated.
Assuming other distributions is possible; the use of the Gumbel distribution leads to a closed-form likelihood, but has the disadvantage of generating a high rate of outliers during prediction, due to the thick tails of the distribution. The Normal distribution, on the other hand, has thinner tails and is a natural choice due to the central limit theorem, while remaining computationally tractable.
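The resulting likelihood has a simple skeleton, sketched below for a single observation under the Normal assumption. The mapping W(x) from observed consumptions to the implied disturbances (Eqn. 10) and the Jacobian of Eqn. 12 depend on the chosen utility components, so both are passed in as user-supplied functions; this is a structural illustration rather than the paper's exact expressions.

```python
import numpy as np
from scipy import stats

def loglik_obs(x, W, jac_consumed, sigma):
    """x: (K,) observed consumptions of the inside goods.
    W(x): values of eps_k implied by the KKT equalities (Eqn. 10 analogue).
    jac_consumed(x): Jacobian over the consumed alternatives only (Eqn. 12 analogue)."""
    w = W(x)
    m = x > 0                                              # consumed alternatives
    ll = stats.norm.logpdf(w[m], scale=sigma).sum()        # KKT equalities: density
    ll += stats.norm.logcdf(w[~m], scale=sigma).sum()      # KKT inequalities: CDF
    if m.any():                                            # change-of-variables term
        ll += np.log(abs(np.linalg.det(jac_consumed(x))))
    return ll
```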
Forecasting
Once the model has been estimated, forecasting requires solving the original maximisation problem proposed in eqn. 1 several times, each time using different draws of ε k from a Normal distribution with mean zero and standard deviation σ, and then averaging the result across these draws.
This must be done separately for each observation in the sample. The optimisation problem can be solved using any algorithm, with Newton-type or gradient descent algorithms being the most common. This forecasting procedure is demanding from a computational perspective, especially if a high number of draws is used for each individual. However, because the forecasts for each individual and draw are independent of one another, calculating them in parallel can significantly reduce the overall processing time. The software implementation in Apollo (ApolloChoiceModelling.com) uses parallel computing to speed up the forecasting.
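A minimal version of this simulate-and-optimise loop, using scipy's SLSQP solver for the budget-constrained maximisation, is sketched below; the user-supplied utility function is assumed to fold the outside good x 0 = B − p'x into its evaluation, and the number of draws and starting point are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

def forecast(utility, B, prices, sigma, K, n_draws=200, seed=0):
    """Monte Carlo forecast for one observation: draw eps, maximise utility
    subject to the budget, and average optimal consumptions across draws.
    utility(x, eps) must return total utility for inside-good consumptions x,
    computing the outside good internally as x0 = B - prices @ x."""
    rng = np.random.default_rng(seed)
    sols = []
    for _ in range(n_draws):
        eps = rng.normal(0.0, sigma, size=K)               # one set of disturbances
        obj = lambda x: -utility(x, eps)                   # maximise = minimise negative
        cons = {"type": "ineq", "fun": lambda x: B - prices @ x - 1e-9}  # keep x0 > 0
        x_start = np.full(K, 0.01 * B / (K * prices.mean()))  # small interior start
        res = minimize(obj, x_start, method="SLSQP",
                       constraints=[cons], bounds=[(0.0, None)] * K)
        sols.append(res.x)
    return np.mean(sols, axis=0)                           # expected consumption per good
```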
3 An MDC model with complementarity, substitution and an implicit budget

In this section we introduce an extension of the model presented in section 2, such that it does not require defining a budget. The formulation and derivation of the model are very similar to those presented in the previous section, so in this section we only highlight the points where the two models differ.
Model formulation
Considering the classical consumer utility maximisation problem described in eqn. 1, we now assume a different utility formulation for the outside good, while all other definitions remain as in the previous section (i.e. as in eqns. 3, 4, and 5).
We assume a linear utility function for the outside good (eqn. 13), as this will later allow us to drop both the outside good consumption x 0 and the budget B from the final model formulation.
While a linear utility function does not comply with the law of diminishing marginal utility (a common assumption in demand models), it should be considered an approximation of a function that does, valid when most of the budget is spent on the outside good and only a relatively small amount is spent on the inside goods. In such a case, changes in the total expenditure on inside goods lead to a relatively small change in the consumed amount of the outside good, and therefore a negligible change in its marginal utility.
More formally, we can write changes in the utility of the outside good using a second-degree Taylor expansion as u 0 (x 0 + Δ) ≈ u 0 (x 0 ) + u 0 ′(x 0 )Δ + ½ u 0 ″(x 0 )Δ², where u 0 ′ and u 0 ″ are the first and second derivatives of u 0 , respectively, and Δ is a small change in the consumption of the outside good. If u 0 is continuous, monotonically increasing, and satisfies the law of diminishing returns, then lim x 0 →+∞ u 0 ′ is a constant greater than or equal to zero, because the slope must smoothly decrease as x 0 increases, without ever becoming negative. It then follows that lim x 0 →+∞ u 0 ″ = 0. Therefore, for a large value of x 0 , we can assume that u 0 ″(x 0 ) is small, and approximate u 0 using a linear function, making u 0 ′ ≈ ψ 0 .
Assuming a linear utility function for the outside good does not necessarily imply that all individuals have the same marginal utility for it, nor that absolutely no information on the budget can be included in the model. The proposed formulation allows for parameterisation of the ψ 0 parameter. The modeller could make ψ 0 a function of socio-demographics, or of other proxies of the budget. For example, ψ 0 could be explained by an individual's full income, occupation, or level of education.
Model derivation
Proceeding in the same way as in section 2.2, we first find a difference when calculating the derivative of the Lagrangian (Eqn. 6) with respect to the outside good, which now reduces to ψ 0 = λ (Eqn. 14).
Combining this with Eqn. 8 leads to Eqn. 15. Replacing ψ 0 and ψ k by their definitions (Eqn. 5), and isolating the random component ε k , we obtain Eqn. 16. Assuming all ε k disturbances follow identical and independent distributions, and applying the Change of Variable Theorem from ε k to x k for the consumed alternatives, we obtain the likelihood function of the model, as described in eqn. 11, except that this time the Jacobian elements are defined as in eqn. 17, with E i the same as in eqn. 12.
Just as with the model with observed budget, we assume all ε k disturbances to follow identical and independent Normal distributions with mean zero and a standard deviation σ to be estimated.
Forecasting
Once the model has been estimated, forecasting requires solving the original maximisation problem proposed in Eqn. 1 several times, each time using different draws of ε nk from a Normal(0,σ) distribution, and then averaging the result across these draws.
To solve the optimisation problem we once again use the Lagrangian in Eqn. 6 and the KKT conditions in eqns. 14 and 8, leading us to Eqn. 15. Assuming an equality and isolating x k , we obtain Eqn. 18, where the definition of E k can be found in eqn. 17, and where it depends on the values of all x n .
Eqn. 18 is a fixed-point problem, i.e. a problem of the form x = h(x). According to the existence and uniqueness theorem, as the right-hand side of Eqn. 18 is continuous in x n over the closed interval [0, B n /p nk ], at least one solution to the problem exists. However, we cannot ensure that the solution is unique. We solve Eqn. 18 through the following iterative approach: (1) set the initial values x^(0) = [x 1 ^(0), ..., x K ^(0)] to zero; (2) compute x^(s+1) by evaluating the right-hand side of Eqn. 18 at x^(s); and (3) repeat step 2 until the change between consecutive iterations is smaller than τ, or until s = S,
where S is the maximum number of iterations allowed, and τ indicates the convergence tolerance parameter, which can be set to the desired precision. This procedure must be performed multiple times for each observation, each time with a different set of draws for the ε k disturbances. Then results for each set of draws must be averaged.
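The iteration can be written in a few lines of Python; h below stands for the right-hand side of Eqn. 18 for one set of ε draws, and the clipping at zero is a practical safeguard added for illustration.

```python
import numpy as np

def solve_fixed_point(h, K, S=500, tol=1e-6):
    """Solve x = h(x) for the implicit-budget forecast (Eqn. 18 analogue).
    h maps the (K,) consumption vector to the right-hand side of Eqn. 18;
    S and tol play the roles of the maximum iteration count and the
    convergence tolerance tau in the text."""
    x = np.zeros(K)                       # step 1: start from zero consumption
    for _ in range(S):
        x_new = np.maximum(h(x), 0.0)     # step 2: evaluate the map, clip at 0
        if np.max(np.abs(x_new - x)) < tol:
            return x_new                  # step 3: converged within tolerance
        x = x_new
    return x                              # hit the iteration cap S
```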
As this model assumes a very large budget, in practice there is no bound on the magnitude of the forecast consumption. Therefore, we recommend only forecasting for values of the explanatory variables in a reasonable vicinity of the values observed in the estimation dataset. What defines reasonable is difficult to quantify, but, for example, if an explanatory variable z 1 ∈ [0, 1] in the estimation dataset, forecasting for z 1 = 10 could lead to unreasonably high consumption levels.
This is similar to how linear models are usually valid only in the vicinity of values on which they were estimated.
Model properties
In this section, we discuss some of the most relevant properties of the model, namely the identifiability of its parameters, including the possibility of using random coefficients; some theoretical constraints on its parameters; and the performance of the model with implicit budget as compared to the model with observed budget.
Identification of parameters
When estimating the proposed models, the modeller should consider the following six points regarding identifiability of parameters.
First, observations that do not consume any inside good should not be excluded from the sample. Even though these observations do not provide any information on the value of ψ k , they do provide information on the value of ψ 0 relative to the inside goods.
Second, there should be no constant (intercept) in the definition of ψ 0 , i.e. z 0 should not contain an element equal to 1 for every individual. As utility does not have any meaningful units, we require setting a base against which all other utilities are measured. To do this, we recommend setting the intercept of the outside good to zero. Any variable that changes across observations can be included in z 0 , even if it is not centred around zero. We recommend populating z 0 with characteristics of decision makers, such as socio-demographics.
In the case of the model with implicit budget (see section 3), we recommend including the individual's income in z 0 . Including income in this way does not imply that the budget is equal to the income, but only that the marginal utility of the outside good depends on it. We would expect a negative coefficient for income if included in ψ 0 , as an increase in income usually leads to increased overall consumption, and therefore a smaller marginal utility of the outside good.
In general, a negative coefficient α indicates that an increase in the corresponding explanatory variable leads to increased consumption of the inside goods. The opposite is true for a positive coefficient.
Third, just as most other MDC models, the two formulations presented in this paper are not scale-independent. This means that the magnitude of the dependent variable influences the results of the model. For example, expressing the dependent variable in grammes or kilogrammes might lead to different forecasts and marginal rates of substitution. This is due to the non-linear nature of the utility functions used in the models. We recommend testing different scalings of the dependent variable, favouring those making the dependent variable range between zero and five, so as to match the range of maximum variability of the transformation in u_kl, which is mostly flat for values x_k > 5 (see figure 1).
Fourth, in the case of the model with implicit budget, complementarity and substitution effects can be confounded with income effects. In the model with implicit budget, all interactions between the consumption of alternatives are captured by the δ_kl parameters. The cause of interaction could be complementarity or substitution, but it could also be due to income effects. For example, a restricted budget could induce increased demand for an inexpensive product while decreasing the demand for an expensive one. This could be captured by the model as substitution between the two products. This problem will be attenuated if the budget is large in comparison with the expenditure on the inside goods.
Fifth, concerning the number of complementarity and substitution parameters (δ_kl): while the model formulation defines one parameter per pair of products, the modeller can easily impose restrictions to reduce the number of parameters to estimate. For example, if alternatives can be grouped into non-overlapping sets, the modeller could constrain all δ_kl parameters to be the same within each group, and across the same pair of groups. Alternatively, the modeller could perform a Principal Component Analysis on the dependent variables, identifying the most important interactions between alternatives, and then estimate only those δ_kl parameters while fixing all others to zero (as done in section 6.2); a sketch of this PCA-based selection follows after these considerations. These or other strategies are recommended when the number of alternatives is large.
Finally, as recommended by Manchanda et al. (1999), the proposed models allow for complementarity, substitution, and coincidence effects, both in a deterministic and random way.
Complementarity and substitution effects are captured by the δ_kl parameters. Coincidence effects are shocks to demand influencing either one or multiple alternatives at the same time, and they can be captured by either ψ_0 (common shocks to all alternatives), or ψ_k and γ_k (independent shocks). All of these parameters allow for deterministic heterogeneity, for example defining δ_kl as a function of socio-demographic characteristics. It is also possible to incorporate random heterogeneity in ψ_k and γ_k by using simulated maximum likelihood techniques (Train, 2009), but we do not recommend including such heterogeneity in ψ_0 nor δ_kl as it could lead to violations of Eqns. 23 and 24 (see section 4.2).
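Following up on the fifth consideration above, a minimal sketch of the PCA-based selection of δ_kl parameters could look as follows; the component count and the threshold are illustrative assumptions, not values used in this paper.

```python
import numpy as np

def select_delta_pairs(X, n_components=2, threshold=0.4):
    """X: (observations x alternatives) matrix of observed consumption."""
    Xc = X - X.mean(axis=0)                       # centre the data
    cov = np.cov(Xc, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)          # PCA via eigendecomposition
    loadings = eigvec[:, ::-1][:, :n_components]  # leading components first
    K = X.shape[1]
    pairs = []
    for k in range(K):
        for l in range(k + 1, K):
            # keep a pair when both goods load strongly on a common component
            if np.max(np.abs(loadings[k] * loadings[l])) > threshold ** 2:
                pairs.append((k, l))
    return pairs  # estimate delta_kl only for these pairs; fix others to zero
```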
To test identifiability of the model through simulation, we created 50 datasets using the generation process of the model with observed budget, and another 50 datasets using the generation process of the model with implicit budget. We then estimated the corresponding model on each generated dataset to check if we were able to recover the parameters used during data generation.
All datasets were composed of 500 observations with four alternatives each. All models shared the specification described in Eqn. 19, but with the value of their parameters randomly drawn on each occasion from the distributions defined in Table 1. The range of parameters was influenced by other models estimated in section 6 and considerations discussed in section 4.2. All explanatory variables (z, x, y) followed a U(0,1) distribution, except for z_1 ∼ Bernoulli(0.5). Prices were drawn from a U(0.1, 1) distribution, while the budget was set to 10 for the models with observed budget.
Table 1: Distributions used to draw parameters from when simulating datasets. (Columns: observed budget; implicit budget.)
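A minimal sketch of this data-generation set-up is shown below; the explanatory variables, prices and budget follow the description above, while the parameter distributions are illustrative stand-ins for Table 1, whose exact values are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)
N, K = 500, 4                              # observations and alternatives

z = rng.uniform(0, 1, size=(N, 3))         # covariates entering psi_0
z[:, 0] = rng.integers(0, 2, size=N)       # z_1 ~ Bernoulli(0.5)
x_cov = rng.uniform(0, 1, size=(N, K))     # covariates entering psi_k
y_cov = rng.uniform(0, 1, size=(N, K))     # covariates entering gamma_k
prices = rng.uniform(0.1, 1.0, size=(N, K))
budget = np.full(N, 10.0)                  # only for the observed-budget model

# Hypothetical parameter draws standing in for Table 1:
alpha = rng.uniform(-1.0, 1.0, size=3)     # coefficients in psi_0
beta = rng.uniform(-2.0, 0.0, size=K)      # base-utility coefficients
gamma = rng.uniform(0.5, 1.5, size=K)      # satiation parameters
delta = rng.uniform(-0.01, 0.01, size=(K, K))
delta = np.triu(delta, 1) + np.triu(delta, 1).T  # symmetric, zero diagonal
```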
Constraints on estimated parameters
The derivation of the likelihood function relies on the assumption of the utility function being monotonically increasing with decreasing marginal returns of consumption. In other words, it assumes ∂U/∂x_k > 0, where U is the global utility. Failing to comply with this assumption renders the likelihood function invalid, as second-order derivatives of the Lagrangian would have to be checked to make sure the critical point is not a minimum. Furthermore, it could lead to the existence of multiple local critical points, i.e. the solution may not be unique, which is once again contrary to the assumptions made during the derivation of the likelihood function. The marginal utility of the outside good is always positive in both models proposed in this paper. But the marginal utility with respect to an inside good will only be positive when the inequality in Eqn. 20 is satisfied.
Additionally, the argument of the logarithm inside W_k must be larger than zero, so as to avoid undefined operations. In the case of the model with observed budget, this translates into the inequality in Eqn. 21. And in the case of the model with implicit budget, it implies Eqn. 22 must be satisfied.
These conditions are functions of x_k, making their fulfillment dependent on the particular dataset at hand. We would like to instead derive dataset-independent conditions. This is possible by noting that the impact of x_k in both conditions is bounded by its exponential transformation to the interval 0 ≤ e^(−x_k) ≤ 1 (because x_k ≥ 0). This allows us to derive more general conditions than Eqns. 20, 21 and 22 by analysing the extreme cases x_k = 0 and x_k = ∞, as the value of the conditions for all other x_k values will fall between these. These extreme cases have the benefit of removing x_k from the conditions. Table 2 summarises the results from this analysis.
All conditions in Table 2 with zero on the right-hand side are always fulfilled, because ψ_k, γ_k, p_k, ∆− and ∆+ are all equal to or bigger than zero. Eqn. 20 for x_k = ∞ will also always be true, as zero is approached from the right (i.e. from positive values). Combining the remaining conditions, the sufficient conditions for the model with observed budget can be summarised as in Eqn. 23, and the sufficient conditions for the model with implicit budget are summarised in Eqn. 24.
Conditions in Eqns. 23 and 24 are based on extreme cases, so they represent sufficient but not necessary conditions for the validity of the parameters. In other words, estimated parameters need only to comply with Eqn. 20, and with Eqn. 21 or 22, but satisfying Eqn. 23 or 24 guarantees that those conditions are met.
If individuals in the dataset behave rationally and in accordance with economic theory, then the estimated parameters should naturally comply with Eqn. 23 or 24. At the time of writing, we have not run into inconsistent parameters, nor have we had to impose parameter constraints during estimation to enforce compliance with these equations.
Suitability of a linear utility for the outside good
In the model with implicit budget, we propose a linear utility for the outside good as an approximation of the case where expenditure on the inside goods (i.e. considered alternatives) is small compared to that on the outside (numeraire) good. In these cases, we expect only very small changes to the marginal utility of the outside good due to changes in the consumption of the inside goods. For example, consider consumption of the yoghurt product category. The expenditure on yoghurt will be small compared to the total expenditure on food, and even smaller compared to the entire disposable income of the household. By using the model with implicit budget, the modeller does not need to determine what the correct budget is, but only needs to know that total expenditure in the category of interest is small compared to the budget, whatever that may be.
If our interpretation is correct, then the forecast of the model with implicit budget should approach that of the model with observed budget when the expenditure on the outside good is large compared to that on the inside goods. We tested this assumption through simulation. We first created 30 different datasets of 500 observations each, assuming a data generation process with observed budget, i.e. using the model presented in section 2. Besides having an outside good, each dataset had four inside goods that were always available. The base utility of the outside good was set to zero, while the base utility of the inside goods was composed of a single constant, each drawn from U(−2, 0), i.e. a uniform distribution between −2 and 0. Satiation parameters γ_k were drawn from U(0.5, 1.5), δ_kl were drawn from U(−0.01, 0.01), while prices p_k followed U(0.1, 1), and the budget was set to 10 for every observation. We measured the fit of each model on each dataset using the Root Mean Squared Error (RMSE) of the forecast aggregate demand in the whole sample. Results are exhibited in figure 4.
As figure 4 shows, the fit of the model with implicit budget approaches that of the model with observed budget as the expenditure on the outside good increases. This indicates that the model with implicit budget is an appropriate approximation when the expenditure on the outside good is large relative to the expenditure on inside goods.
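For reference, the fit measure used in these comparisons can be computed as in the minimal sketch below, which assumes consumption arrays with one column per alternative.

```python
import numpy as np

def aggregate_rmse(x_obs, x_pred):
    """RMSE of forecast aggregate demand over the whole sample.

    x_obs, x_pred: (observations x alternatives) consumption arrays.
    """
    agg_obs = x_obs.sum(axis=0)    # aggregate observed demand per good
    agg_pred = x_pred.sum(axis=0)  # aggregate forecast demand per good
    return float(np.sqrt(np.mean((agg_obs - agg_pred) ** 2)))
```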
Comparison with other MDC formulations
The MDC models presented in this paper are not the first in the literature to include complementarity, substitution or an implicit budget. In this section, we discuss other MDC models with these properties, and compare them to the models proposed in this paper. We begin with a very brief review of models without complementarity or substitution (other than income effects), which form the basis for more flexible models.
No complementarity or substitution, and an observed budget
One of the most popular models in this category is the MDCEV model by Bhat (2008). It is derived from the same consumer optimisation problem proposed in Eqn. 1, but using a different functional form for the utility components. While there are several possible formulations, the most common one is the alpha-gamma formulation, as it allows for an efficient forecasting algorithm (Pinjari and Bhat, 2011). In this case, the utility takes the form described in Eqn. 25, where α can either tend towards zero during the estimation process, or the modeller can fix it a priori.
Parameter interpretation in the MDCEV model is essentially the same as in the models described in this paper, except for two differences. First, the outside good's marginal utility contains no covariates, but only a stochastic error term, i.e. ψ_0 = e^(ε_0). Second, α measures satiation across the whole choice set in MDCEV, and not the influence of covariates on the outside good's marginal utility as in the models proposed in this paper. And while it is possible to introduce explanatory variables into the base utility of the outside good in MDCEV models (either directly, or by including them with the same coefficient in all inside goods' base utilities), this is not commonly done in practice.
By setting u_kl = 0, the MDCEV model does not allow for pure complementarity or substitution effects, though product substitution can still take place due to income effects. Also, the form of u_0 requires the value of x_0, and therefore the budget, to be observed. Kim et al. (2002) use a similar utility function to the MDCEV model, but assume that the random disturbances follow a multivariate normal distribution; while more flexible, this distribution makes the model much more computationally demanding. Von Haefen and Phaneuf (2005) also present a similar model to MDCEV, but without an error term in the marginal utility of the outside good. Other models in this category include Habib and Miller (2008) and Habib and Miller (2009), who present models similar to that by Von Haefen and Phaneuf (2005).
Introducing complementarity and substitution through new functional forms
Vásquez Lavín and Hanemann (2008) propose a model formulation allowing for complementarity and substitution using a non-additively separable utility function and an observed budget. This formulation was later refined by Bhat et al. (2015), who called it the NASUF model. Beginning from the consumer optimisation problem set in Eqn. 1, the utility components are defined as described in Eqn. 26.
The definition of u_kl makes the NASUF utility function non-additive, effectively introducing complementarity and substitution effects. A positive value of θ_kl is indicative of complementarity, while a negative one represents substitution, and θ_kl = 0 implies no complementarity or substitution. Yet, this formulation has three main drawbacks.
The first drawback is that the utility function is valid only for some values of θ_kl. Just as in the case of the models proposed in this paper, and as discussed in section 4.2, the derivation of the likelihood function assumes ∂U/∂x_k > 0. For this to be true, the inequality in Eqn. 27 must be satisfied.
While it is possible to bound the value of parameters during estimation, the problem with the condition in Eqn. 27 is that it depends on the value of x_k. As the logarithm is not a bounded function, whether or not this condition is satisfied will depend on the level of consumption x of each individual, making it impossible to assess the correctness of a model without associating it with a particular dataset. This hinders model transferability from one dataset to another, and jeopardises forecasting, as only scenarios that fulfil the condition above should be permissible forecasts.
If all individuals in the dataset behave in accordance with economic theory, then the parameters should automatically fulfil Eqn. 27. Yet, this does not prevent the estimation algorithm from trying parameter values that violate Eqn. 27 during the parameter search. Furthermore, calculating the likelihood of the model requires calculating the logarithm of the expression in Eqn. 27, leading to an error if the expression is less than or equal to zero.
The second issue with the solution proposed by Bhat et al. (2015) is that the stochasticity is introduced midway through the derivation of the model, in the Karush-Kuhn-Tucker conditions, and not in the initial formulation of the model. While this is merely a formal issue, it does imply that the origin of the randomness is not clear, and it is not possible to easily associate it with unobserved variables or measurement errors, as would be the case in more traditional econometric models.
The third issue is that the γ parameters have a role both in satiation and in the interaction term (i.e. complementarity and substitution) of the utility, making their interpretation difficult. A similar formulation was proposed by Lee and Allenby (2009), but using a quadratic function to incorporate satiation, complementarity, and substitution. This model only considers inside goods, defining the global utility as a quadratic function of the consumption levels x_l (we assume only one product per category to simplify the analysis). Note that θ_kk is not restricted to zero in this case, as it is in the models proposed in this paper. The validity of the formulation rests on a condition which depends on the value of x_k, leading to the same issue already discussed in the context of the NASUF model.
Finally, Lee et al. (2010) propose a model allowing for asymmetric complementarity and substitution among categories of products. However, the formulation of the model does not satisfy the principle of weak complementarity (Maler, 1974), i.e. that an individual's utility is not influenced by the attributes of non-consumed goods or, in other words, that goods provide utility only through their use. This is a reasonable assumption in cases where non-use values are believed to be absent or small (see von Haefen (2004) for a more detailed discussion).

Pellegrini et al. (2019) refine the model proposed in Bhat et al. (2015) by proposing a different interaction term in the utility function. While this new formulation leads to an improved fit and provides a clear interpretation of the γ parameters, it retains at least the first issue associated with the formulation of Bhat et al. (2015). Pellegrini et al. (2021a) further expand the NASUF model by allowing for two budget constraints in an application where both time and monetary constraints are considered jointly.
Introducing complementarity and substitution through the indirect utility function
While in this paper we derived MDC models from the direct utility function of consumers, it is also possible to make assumptions on the indirect utility instead, and then calculate the optimal consumption using Roy's identity, as described in section 3.1 of Chintagunta and Nair (2011). Song and Chintagunta (2007) propose an MDC model following the indirect utility approach, considering not only a set of alternatives, but grouping them into categories, and assuming that at most one alternative inside each category is consumed. Furthermore, this model imposes a symmetry constraint on its complementarity and substitution parameters, as described in Eqn. 28:

∑_{l=0}^{M} θ_kl = 0 ∀k (28)

where θ_kl represents the complementarity and substitution parameters (originally called β in Song and Chintagunta (2007)). Eqn. 28 forces that, for each product, the amounts of complementarity and substitution with other products add up to zero. But there are no theoretical reasons for this to necessarily be the case in any given application. This requirement prevents, for example, a product from having complementarity with only one other product while not having substitution with any other product. Mehta and Ma (2012) propose a model with a similar formulation to that of Song and Chintagunta (2007), but without the symmetry constraint. However, it requires the matrix of complementarity and substitution parameters (whose elements are θ_kl) to be positive semi-definite.
Additionally, the likelihood function does not have a closed functional form, requiring multi-dimensional integration; and the number of parameters increases geometrically with the number of alternatives.
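The positive semi-definiteness requirement mentioned above is straightforward to verify numerically; the sketch below checks a candidate θ matrix via its eigenvalues, assuming the matrix is stored symmetrically.

```python
import numpy as np

def is_positive_semidefinite(theta, tol=1e-10):
    """True if the symmetric matrix theta is positive semi-definite."""
    theta = np.asarray(theta, dtype=float)
    assert np.allclose(theta, theta.T), "theta must be symmetric"
    eigvals = np.linalg.eigvalsh(theta)  # eigenvalues of a symmetric matrix
    return bool(np.all(eigvals >= -tol))
```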
Introducing complementarity and substitution through correlation in utility functions
An alternative way to introduce complementarity and substitution into an MDC model is by introducing correlation across the utilities of alternatives. This can be done in two ways: (i) by directly correlating the random error term ε in the utility function of each alternative across multiple alternatives, or (ii) by adding new random error terms common to the utilities of multiple alternatives. Pinjari and Bhat (2010) use the first approach, using extreme value distributions to nest alternatives together into mutually exclusive subsets, allowing for perfect substitutes but not for complementarity. This approach was generalised by Pinjari (2011) by allowing for overlapping, non-exclusive nests, but still without allowing for complementarity. Bhat et al. (2013) make ε follow a multivariate normal distribution across alternatives, allowing for flexible correlation patterns. Calastri et al. (2020a) follow the second approach, using random intercepts and coefficients (β in our notation) correlated across alternatives.
As Pellegrini et al. (2021a) discuss, the main limitation of introducing complementarity and substitution through correlation in the utility functions of different alternatives is that of confounded effects. Indeed, under this approach it is impossible to discriminate between correlation due to common heterogeneity in preferences and correlation due to complementarity and substitution. For example, two utilities could be positively correlated because they share unobserved attributes, and not because the alternatives are complementary.
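The sketch below illustrates approach (i), drawing the error terms of all alternatives from a multivariate normal with an assumed covariance matrix so that utilities co-move across alternatives; the covariance values are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
K = 4
Sigma = 0.25 * np.eye(K)
Sigma[0, 1] = Sigma[1, 0] = 0.15   # positive correlation: goods 0 and 1
Sigma[2, 3] = Sigma[3, 2] = -0.10  # negative correlation: goods 2 and 3

eps = rng.multivariate_normal(np.zeros(K), Sigma, size=500)
# eps[n] enters the utility of each alternative for observation n; the
# off-diagonal terms induce co-movement that is observationally hard to
# separate from true complementarity or substitution.
```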
Two stage approaches to unobserved budgets
The necessity to observe the budget can lead to two separate issues. The first arises during estimation, when the budget is not observed. This forces the modeller to assume some value for the budget before even estimating an MDC model. A common solution to this problem in past work has been to use the total expenditure as the budget. This is a strong assumption, as it implies that the total expenditure will not change as a function of prices or other attributes of the products. For example, it implies that consumers will spend the same amount regardless of the level of discount offered.
The second problem caused by the necessity of an observed budget in MDC models manifests during forecasting. Forecasting for any future scenario requires exogenously defining a budget.
Any errors in the forecasting of the budget will cascade down to the MDC model, as shown in section 6.2.
In the literature, these problems have been addressed mostly through two-stage procedures, where in the first stage a model is used to estimate (and predict) the budget, and in the second stage a traditional MDC model with observed budget is used to allocate the budget to the different alternatives. Pinjari et al. (2016) propose such a two-stage approach. In the first stage, they use either a stochastic frontier or a log-linear regression to estimate the expected budget, and in the second stage they use the expected budget in an MDCEV model. They compare the performance of both approaches against arbitrarily determined budgets. When using the stochastic frontier method, they assume the budget to be an unobservable characteristic of decision makers, defined as the maximum amount they are willing to spend. This implies that the expected budget under this approach tends to be bigger than the total expenditure. The log-linear regression, on the other hand, attempts to predict total expenditure, so it leads to expected budgets of the same magnitude as the total expenditure. While both approaches offer similar performance, and both outperform the arbitrarily determined budget, the stochastic frontier approach leads to bigger expected budgets, therefore allowing for more variability in the forecast, as the total expenditure has room to grow if the attributes of the alternatives improve. This approach is also used by Pellegrini et al. (2021b).

Dumont et al. (2013) propose a different two-step approach to estimate the budget. In the first step, they estimate a Structural Equation Model (SEM) where the budget is a latent variable, whose structural equation has socio-demographics as explanatory variables. The budget can have several indicators, such as average expenditure in the category during the last three months, expected expenditure in the future, and ownership of goods from the same category. Income is also considered a latent variable, with at least stated income as an indicator. More formally, the latent budget B_n and latent income I_n are linked through structural and measurement equations in which Z_n are socio-demographics of individual n, y_nj is indicator j of the budget, S_n is the stated income, η_n, ξ_n, ε_nj and ε_ns are standard normal error terms, and ζ_z, ζ_I, λ_j, σ_j, λ_s and σ_s are parameters to be estimated. As expected, the authors report lower log-likelihoods when using the SEM approximation to the budget than when using maximum expenditure, but they also note an improvement in the significance levels of the MDC parameters. They do not report changes in forecast performance, making it difficult to evaluate the performance of the proposed approach.
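As a concrete illustration of a first stage, the sketch below fits a log-linear expenditure regression and uses it to predict budgets for a forecast sample; it is a minimal stand-in for the stochastic frontier or SEM alternatives discussed above, with illustrative variable names.

```python
import numpy as np

def fit_log_linear(Z, expenditure):
    """OLS of log(expenditure) on socio-demographics Z (adds an intercept)."""
    Zd = np.column_stack([np.ones(len(Z)), Z])
    coef, *_ = np.linalg.lstsq(Zd, np.log(expenditure), rcond=None)
    return coef

def predict_budget(coef, Z_new):
    """Expected budget for new observations, fed to a second-stage MDC model."""
    Zd = np.column_stack([np.ones(len(Z_new)), Z_new])
    return np.exp(Zd @ coef)
```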
Other MDC models with implicit budget
Other models in the literature have also used linear utility functions for the outside good, in the same way as the models proposed in this paper. This functional form leads to a likelihood function that does not depend on the budget, effectively allowing for unobserved budgets.
In the context of the MDCEV model and its derivations, Bhat (2018) was the first to propose using a linear utility function for the outside good. This functional form, however, was not motivated by the need to drop the budget from the model formulation; rather, it was used to allow for more separability between the parameters that determine the discrete choice (i.e. what to choose) and those that determine the continuous choice (i.e. how much to choose). Therefore, this property of the model is hardly explored in that paper.
More recently, Saxena et al. (2022) discussed the consequences of using a linear utility for the outside good in models with additively separable utility functions. Such a configuration leads to models that do not consider complementarity, substitution, nor income effects, therefore making the demand for one product independent of the others, unlike the models proposed in this paper (though it does allow for parameterising ψ_0). Similarly to our own advice, they recommend using a linear utility function for the outside good only when the total expenditure on the inside goods is no more than 35% of the budget (or, more strictly, less than 5%). If the expenditure on inside goods is higher than those values, they find bias in the model estimates and poor forecasting performance.
While we did not find evidence of biased parameters in the proposed model (see Figure 3), we did find evidence of poor forecast performance (see Figure 4). The absence of parameter bias in the proposed model could be due to its inclusion of complementarity and substitution effects, and to the fact that the error term follows a Normal distribution instead of a Gumbel distribution.
Model application and comparison
In this section we apply the proposed models to four different datasets. The first dataset records time use, where all participants face the same budget (24 hours a day), and all alternatives (in this case, activities) have the same price (one unit of time). This dataset allows us to measure how much fit is lost when using the model with implicit budget even though the budget is known, as well as to compare the proposed models against a model without complementarity or substitution. The second dataset deals with household expenditure, where budgets vary between households, but consumption is aggregated into categories, so prices are still unitary (one unit of money). This dataset helps us illustrate how the fit of the model with observed budget degrades when the budget is misspecified, a case particularly relevant in forecasting. The third dataset contains scanner data from a supermarket, where both budgets and prices vary from one observation to the next. This dataset allows us to compare the price sensitivity of the models with observed and implicit budget. The last dataset reports the number of trips performed by travellers for different purposes. This dataset is a case where the very definition of a budget is problematic, as there is no evident limit on the number of trips during a day.
Fixed budget and fixed prices: time use dataset
The first dataset records time use of 447 individuals across 2,826 days in total. Details about the data collection can be found in Calastri et al. (2020b), and applications of time use analysis using this data can be found in Calastri et al. (2019) and Palma et al. (2021). Only out-of-home activities are registered in the dataset, which we aggregate into six categories plus the outside good, as described in Table 3. We estimated three different models using the time use data. First, we estimated a traditional MDCEV model (Bhat, 2008), which has an observed budget and no complementarity.
We also estimated the first model proposed in this paper (eMDC1), with an observed budget, complementarity and substitution. Finally, we estimated the second model proposed in this paper (eMDC2), with an implicit budget, complementarity and substitution.
In the case of time use, the budget is observed (24 hours a day for everyone), and remains unchanged in forecasting scenarios, giving a clear advantage to the MDCEV and eMDC1 models.
Nevertheless, we are interested in exploring the consistency of results across the models with observed budget, as well as the loss of fit in the eMDC2 model (which uses an implicit budget) with respect to the others. We estimated the models using 70% of the sample, and forecast for the remaining 30%. Table 4 presents the estimated parameters, likelihood and root mean squared error (RMSE) of the forecast consumption at the aggregate sample level for each model.
The parameter estimates point towards consistent effects across models. And while parameters across models change in magnitude, their signs remain unchanged. Parameter interpretation is equivalent across models, except for α. In the MDCEV model, α measures satiation across all alternatives. Instead, in the proposed eMDC models, α represents the impact of the associated explanatory variable (z_0) on the marginal utility of the outside good (ψ_0). In the proposed models, α > 0 (α < 0) implies a positive (negative) effect of z_0 on ψ_0, therefore an increased (decreased) consumption of the outside good, and a decreased (increased) consumption of the inside goods when z_0 grows. In this particular application, the negative sign of α_female indicates that, after controlling for other variables, women on average perform more out-of-home activities than men.
Concerning the β parameters, all of them are negative because all "inside" activities are less common than the "outside" activity (staying at home, see Table 3). These parameters become more negative as engagement with their corresponding activity decreases, except for leisure and work in eMDC1, probably due to the effect of interactions. As expected, working full time increases the chance of engaging in work activities, while the weekend decreases it but increases the chance of engaging in leisure activities; and being 30 years old or younger increases the probability of engaging in school activities. The γ parameters follow a similar trend, with higher values associated with activities performed for longer periods of time. The only exception is school, which has a large γ parameter despite being consumed for shorter periods than leisure, probably to compensate for its small ψ_school.
Only the eMDC models provide information on complementarity and substitution through their δ parameters, which are fairly consistent across eMDC1 and eMDC2. As expected, there is substitution between work and school, because few people work and study concurrently. On the other hand, we observe complementarity between shopping, private business and leisure, probably because all of these activities are often performed at the city centre, and are therefore easier to chain into a single trip. As Table 3 shows, correlations between time consumption are negative for all pairs of activities, because of the fixed budget and the competing nature of the activities. Yet we do observe that correlations with a magnitude smaller than 0.05 tend to be associated with complementarity effects. In section 6.3, we again compare correlations and complementarity/substitution parameters, but in a dataset where the budget constraint is less binding, finding a much stronger connection between them.
Concerning fit, the eMDC1 model achieves the lowest RMSE of the three models, followed by eMDC2 and MDCEV. We expected eMDC1 to achieve the best fit, as it uses all the available information, including the total consumption or budget, and it includes complementarity and substitution effects. On the other hand, it was hard to predict which of the other two models would achieve the second-best fit, as the MDCEV model omits complementarity and substitution, while the eMDC2 model does not use information about the budget. In this particular case, the eMDC2 model fits better than MDCEV, but this is probably a dataset-dependent result, and may change in other study scenarios. The log-likelihood is not comparable across models, as they have different formulations, making the RMSE a better indicator of fit. In summary, when the budget is known, and will be known in future scenarios when forecasting is relevant, we recommend using the eMDC model with observed budget.
Variable budget and fixed prices: household expenditure dataset

Parameter estimates of models eMDC1-100 and eMDC2 are presented in Table 6. Parameter estimates of eMDC1-80 and eMDC1-120 followed similar trends, and are available from the authors.
The α, β and γ parameters follow a similar trend in models eMDC1-100 and eMDC2. Results indicate that having a female or older household head increases the marginal utility of the outside good (i.e. decreases expenditure on the inside goods), while a more educated household head has the opposite effect. These effects can be explained by the low female participation in the labour market (Contreras and Plaza, 2010), higher levels of education among younger individuals (Organisation for Economic Co-operation and Development, 2009), and a strong correlation between level of education and income in the Chilean population (Bilbao, 2013). Among the β parameters, we observe that a higher number of adults, children, elders, workers and students per household increases the chance of spending money on alcohol, clothing, health, transport and education, all of which are reasonable effects. Furthermore, the estimates of the γ parameters indicate that more populous households tend to spend more on food, transport, communications, leisure, education and others, but not necessarily on alcohol, clothing, homeware, health, and restaurants, as these categories are more discretionary.
The complementarity and substitution parameters δ are particularly different between the models with observed and implicit budget (eMDC1-100 and eMDC2, respectively). While the model with observed budget captures substitution between multiple pairs of categories, the model without it is dominated by complementarity. This is because when the budget is not controlled for, all categories of consumption seem to increase or decrease in tandem, because a higher (lower) income implies a higher (lower) expenditure across all categories. In other words, the income effect is confounded with complementarity in the model with implicit budget, as discussed in section 4.1.
Our main objective with this dataset was to analyse how errors in the definition of the budget lead to different forecast errors in models with observed budget. To do this, we first estimated the models using 70% of the full sample (training dataset), and then forecast demand on the remaining 30% of observations (validation dataset) multiple times, assuming a different value of the budget on each occasion. We repeated this for each of the eMDC1 models we estimated.
Different budgets lead to different forecasts in the eMDC1 models, but not in the eMDC2 model. Figure 5 presents the results of this exercise. We used the root mean squared error (RMSE) of the aggregate predictions in the validation sample as an indicator of error in the forecast.
As Figure 5 shows, the forecast performance of the model with implicit budget (eMDC2) does not change as a function of the budget. Instead, the eMDC1 models achieve a better forecast performance when the forecast budget is close to the estimation budget, but their error grows in a quadratic way with the budget misspecification. How the estimation budget is defined in the eMDC1 models does not seem to be very important. For example, the estimation budget could be defined as the total income of the household, or just the total expenditure on the inside goods plus one. However, once a budget has been used during estimation, it is very important to accurately and consistently predict the budget for any forecasting scenario; otherwise the forecast error can grow rapidly. These results reveal that in contexts where the forecasting of the budget implies even mild uncertainty, the proposed model with implicit budget can ensure a bounded level of error in the forecast.
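A minimal sketch of this sensitivity exercise is given below, assuming a `forecast` callable that re-solves the estimated eMDC1 model for a given budget; the callable and the multiplier grid are placeholders, not part of the estimation code.

```python
import numpy as np

def budget_sensitivity(forecast, x_obs, base_budget, multipliers):
    """RMSE of aggregate demand when forecasting under scaled budgets."""
    rmse = []
    for mult in multipliers:
        x_pred = forecast(budget=mult * base_budget)   # re-forecast demand
        agg_err = x_obs.sum(axis=0) - x_pred.sum(axis=0)
        rmse.append(float(np.sqrt(np.mean(agg_err ** 2))))
    return rmse

# e.g. multipliers = np.linspace(0.7, 1.3, 13) reproduces a sweep like Figure 5.
```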
Variable budget and variable prices: supermarket scanner dataset
The third application deals with scanner data from a chain of supermarkets (Venkatesan, 2014).
After dropping all records of transactions from households with missing socio-demographic characteristics, and limiting the analysis to only four product categories, the dataset contains 4,002 purchase baskets from 656 households. All the considered product categories are fresh fruits: oranges, peaches, pears, and pineapples. Each fruit can be purchased in packs of different weights, but to simplify the analysis, we calculated the average price per kg of each product, and expressed the amount purchased in kg. Table 7 summarises consumption in the dataset.

Our objective with this dataset was to compare the models with observed and implicit budget in terms of their sensitivity to changes in price. We estimated two models on the supermarket dataset: eMDC1 is the model with observed budget, which we set to the observed consumption plus one; the second model (eMDC2) assumes an implicit budget. The parameter estimates and log-likelihood at convergence of these models are shown in Table 8. Non-significant parameters were not removed from the model formulation. To compare their sensitivity to price, we changed the price of oranges between 70% and 130% of its original value, and calculated both models' aggregated forecast demand on the training dataset. Figure 6 plots the demand forecast by each model for different prices.
As can be seen in Figure 6, both models predict a similar demand for the product whose price changes (oranges), but offer different predictions for the other products, whose prices remain constant. This is because the income effect is only present in the model with observed budget, pushing for a much more dramatic reassignment of consumption when prices change.
On the other hand, the model with implicit budget assumes a large unobserved budget, inducing smaller reassignment effects caused only by the δ parameters. Assuming a larger budget in eMDC1 would decrease the sensitivity of the forecast demand among the products whose price does not change, making it more similar to the forecast of the eMDC2 model (not reported). Based on the available data, we cannot determine which of the two predictions is more accurate, as we are forecasting for unobserved prices.
The complementarity and substitution (δ_kl) parameters are significantly different across models. While eMDC1 captures only complementarity, eMDC2 captures both complementarity and substitution. This is because the δ parameters in eMDC2 are not only capturing the complementarity and substitution effects, but are also confounded with the income effect. This is apparent as the signs of the δ parameters in eMDC2 mirror those of the correlations of demand in the dataset (see Table 7). This also explains why the δ parameters in eMDC2 have higher t-ratios, as they are used to capture any interaction between the demand for different products, be it due to complementarity, substitution, or income effects. Larger budgets (as compared to expenditure on inside goods) will reduce the size of income effects, making the model with implicit budget more suitable for such scenarios.

No evident budget: number of trips dataset

The last application deals with the number of trips generated by a household, split across different purposes: work, study, personal business, leisure and return home. Data come from the 2012 Origin-Destination survey of Santiago, Chile (Observatorio Social, 2014). The database contains observations for a single day from 10,927 households. Table 9 summarises the average number of trips per purpose by households' number of vehicles and income.

Our objective with this dataset is to compare out-of-sample forecast performance between the proposed models with explicit and implicit budget (eMDC1 and eMDC2, respectively) when the definition of the budget is arbitrary. In theory, the budget in our dataset should be the maximum number of trips a household could generate during a day, but this value is very difficult to determine. Defining the budget as any lower (but more reasonable) value would be an arbitrary decision. A common approach in situations without an evident budget is to use the observed total consumption as the budget (Bhat and Sen, 2006). We follow this approach when estimating eMDC1, assuming the budget to be equal to the observed total number of trips plus one, so that the "outside good" is always consumed. However, this strategy poses a problem when predicting out of sample, as the budget needs to be predicted using an auxiliary model. To reproduce this situation, we estimate our models using only 70% of the whole sample, and predict for the remaining 30%. In the case of eMDC1, we predict the budget using a linear regression estimated on the training data. In the case of eMDC2, we have no need to make assumptions about the budget nor to use an auxiliary model for out-of-sample prediction, as the budget is not needed during estimation or forecasting.
In both eMDC1 and eMDC2, we use a linear function with the same socio-demographics to explain the base utility of the outside good (ψ_0). The base utility of each inside good and its satiation are each described by a single constant. The linear regression used to predict the budget has the same socio-demographic explanatory variables as the discrete-continuous models. Table 10 presents the coefficients of each model estimated on the training dataset (70% of the whole sample), and their forecast performance when predicting on the validation dataset (the remaining 30% of the sample). Table 11 presents the complementarity/substitution (δ) parameters of both eMDC1 and eMDC2. Establishing parallels between the parameters of both models is difficult. In the model with observed budget (eMDC1), the effect of socio-demographics has two components: their effect on the budget prediction, and their effect on the multiple discrete-continuous model itself. On the other hand, the model with implicit budget (eMDC2) does not have this complexity.

Conclusions

In summary, this paper proposed two models: one with an observed budget that incorporates complementarity and substitution effects, and another that, additionally to these effects, does not require the analyst to define a budget. The inclusion of explicit complementarity and substitution effects enriches the interpretability and realism of the model (Manchanda et al., 1999), while its functional form avoids issues present in previous formulations proposed in the literature (see section 1). The second model, with its implicit budget, is particularly useful when forecasting as it avoids cascading errors due to inaccurate budget predictions (see section 6.2).
The model with implicit budget is based on the hypothesis that total expenditure on the alternatives under consideration is small compared to the overall budget. This hypothesis allows us to approximate the utility of the numeraire good by a linear function, hence removing the necessity to define a budget. This approximation comes at the cost of reduced fit, as compared to the model with observed budget. However, simulations show that the fit of both models converges when the hypothesis above is fulfilled (see section 4.3). Such an assumption is realistic in most daily consumption decisions, but should always be justified when using the model. In general, if the budget can be determined with a great degree of confidence in forecasting scenarios, then we recommend using the model with observed budget. But if there is significant uncertainty in the budget prediction, the model with implicit budget can be a useful alternative, as it makes the prediction error independent from the budget estimation.
A computational implementation of the proposed model is available for R, as an extension of the Apollo package (Hess and Palma, 2019). To download this extension and see examples, visit ApolloChoiceModelling.com.
The models proposed in this paper contribute to the literature on Kuhn-Tucker system demand models to study multiple-discrete choices. There are still several avenues for improvement and further investigation. New functional forms for the complementarity and substitution term in the direct utility function could be explored, with special emphasis on those leading to a compact form of the Jacobian in the likelihood function. More generally, including a random component in the marginal utility of the outside good would be a useful development, especially if it leads to a closed-form likelihood function. Alternative formulations based on indirect utility functions could be less restrictive, as they avoid assumptions on the shape of decision makers' direct utility functions. The model formulation could also be modified to incorporate multiple constraints, for example a monetary and a time budget, or a storage capacity. Of particular interest would be an approach that mixes constraints with explicit and implicit budgets. Finally, an empirical comparison of alternative formulations for the complementarity and substitution component of the utility, as well as the utility of the outside good, is of much interest, especially given recent developments in Bhat (2018) and Pellegrini et al. (2021a).
Figures 2 and 3 summarise the true and estimated parameters for the models with observed and implicit budget, respectively. In the graphs, the horizontal axis indicates the true value of each parameter, while the vertical axis indicates the estimated value. A perfect recovery of a parameter is represented by a dot along the identity line (in blue). The graphs also contain the 95% confidence interval for each estimated parameter. Both figures offer a similar perspective: all parameters are recovered correctly, but α and β parameters are recovered more precisely, while γ and δ parameters (especially the latter) are harder to recover.
Figure 2: Recovery of parameters for the model with observed budget.
Figure 3: Recovery of parameters for the model with implicit budget.
Figure 4: Compared fit of models with observed and implicit budget, on data generated assuming a generation process with observed budget.
Figure 5: Comparison of forecast precision of models with implicit and observed budget, when the budget is wrongly specified in the latter.
Figure 6: Relative aggregated sample demand forecast by the traditional and extended MDCEV models for variations in the price of oranges. The black line indicates unity (i.e. the original demand).
Table 2: Constraints on proposed model parameters for extreme levels of consumption x_k (columns distinguish x_l with δ_kl > 0 and x_l with δ_kl < 0).
Table 3: Main descriptive statistics of the time use database. * outside good; † when engaged.
Table 4: Comparison of the proposed extended MDC models and a traditional MDCEV model on the time use dataset.
Table 6: Comparison of models with observed and implicit budget on the expenditure dataset.
Table 7: Main descriptive statistics of the supermarket scanner data.
Table 8: Parameter estimates of models with observed and implicit budget on the supermarket scanner dataset.
Table 9: Main descriptive statistics of the number of trips database.
Table 10: Parameter estimates and forecast performance for models on the number of trips dataset. * Robust t-ratio. † Calculated based on out-of-sample prediction.
Examining the Transmission of Visible Light through Electrospun Nanofibrous PCL Scaffolds for Corneal Tissue Engineering
The transparency of nanofibrous scaffolds is of highest interest for potential applications like corneal wound dressings in corneal tissue engineering. In this study, we provide a detailed analysis of light transmission through electrospun polycaprolactone (PCL) scaffolds. PCL scaffolds were produced via electrospinning, with fiber diameters ranging from (35 ± 13) nm to (167 ± 35) nm. Light transmission measurements were conducted using UV-vis spectroscopy in the range of visible light and analyzed with respect to the influence of scaffold thickness, fiber diameter, and surrounding medium. Contour plots were compiled for straightforward access to light transmission values for arbitrary scaffold thicknesses. Depending on the fiber diameter, transmission values between 15% and 75% were observed for scaffold thicknesses of 10 µm. Light transmission improved with decreasing fiber diameter, as well as with matching refractive indices of fiber material and medium. For corneal tissue engineering, scaffolds should be designed as thin as possible and fabricated from polymers with a refractive index matching that of the human cornea. Concerning fiber diameter, smaller fibers should be favored to maximize graft transparency. Finally, a novel, semi-empirical formulation of light transmission through nanofibrous scaffolds is presented.
Introduction
In the field of tissue engineering, electrospun scaffolds are commonly used [1]; however, optical properties are in general of minor importance in most applications. In the case of tissue engineering for ophthalmic applications, the transparency of the graft is of highest interest. The cornea is the window of the eye, and its transparency is essential for human beings. Recently, electrospun scaffolds have been discussed for use in ophthalmic applications such as wound dressings after corneal surgery [2][3][4][5][6] or as artificial DMEK (Descemet Membrane Endothelial Keratoplasty) grafts [7,8] for treating patients with corneal endothelial cell pathologies. In both cases, the transparency of the scaffold is of major importance for patients' immediate benefit after surgery. The transparency of a healthy cornea, which is the reference material in this case, is 85-99% in the visible spectrum [9]; hence, a similar transparency is sought for artificial grafts.
For corneal tissue engineering, in addition to xenogeneic tissue like decellularized corneas [10][11][12], different materials and approaches have been investigated, including nanofibers [5,7,13], hydrogels [4,14,15], and composites thereof [16][17][18]. The transparency of the investigated materials was usually determined by light transmission measurements of individual samples with discrete scaffold thicknesses, and no general transmission study was conducted [6,7,13,17]. The comparison of individual scaffolds with different specifications, such as material or fiber diameter, always presents the problem of insufficient accuracy in scaffold thickness. Beyond the field of biomaterials, the optical properties of nanofiber scaffolds have mostly been investigated for optoelectronic and energy-related developments to enhance device efficiency [19][20][21].
PCL is a well-studied material in the field of tissue engineering, in particular in corneal tissue engineering [22]. Although PCL is known for its opacity, it seems worthwhile to study it further due to its remarkable properties: it is biodegradable, easy to blend with other polymers, and has good mechanical strength.
So far, only a few studies have been conducted on the transparency of PCL nanofiber scaffolds. For example, Park et al. [23] measured light transmission through electrospun PCL scaffolds for two different fiber diameters and wavelengths. Using an integrating sphere, Park et al. were able to measure the directly transmitted and reflected fractions of the incident beam. Their observations indicated that scattering by the nanofibrous structure is the dominant factor, compared to light absorption by the material. However, only two discrete wavelengths were investigated, and the influence of a surrounding medium was neglected.
From a physical point of view, the transmission of an electromagnetic wave through a medium can be defined as

T = I/I_0 (1)

and describes the transparency of a material. The incremental decrease in light intensity dI within an infinitesimal distance dx is proportional to the incident beam I,

dI = −µ I dx (2)

which can be simply integrated to

I(x) = I_0 exp(−µx) (3)

for I = I_0 at x = 0. The parameter µ, known as the extinction coefficient, describes the absorption and scattering of the electromagnetic wave within the volume and can be written as

µ_total = µ_absorption + µ_scattering (4)

Additionally, the incident electromagnetic wave can be reflected at the interface between two optically adjacent phases, characterized by their refractive indices n_i. If vertical incidence is assumed and the polarization of the light is of no relevance, the Fresnel equation [24,25] yields the reflectance R, reducing the light transmission T to

T_reflection = 1 − R = 1 − ((n_1 − n_2)/(n_1 + n_2))^2 (5)

where n_1 and n_2 are the refractive indices of the surrounding medium and the material, according to Figure 1a. When an electromagnetic wave passes through a volume, reflectance occurs at the n_1/n_2 as well as at the n_2/n_1 interface. Thus, combining Equations (3)-(5) and neglecting multibeam interference, the overall light transmission through a homogeneous volume of thickness d can be written as

T = I/I_0 = T_reflection^2 exp(−(µ_absorption + µ_scattering) d) (6)

In the case of a nanofibrous scaffold of thickness D consisting of nanofibers with a fiber diameter d, as displayed in Figure 1b, µ_absorption describes light absorption within each fiber, and µ_scattering describes light scattering at the individual fibers. The scattering coefficient µ_scattering depends on the scattering cross section of the scatterers, i.e., the nanofibers. The scattering cross section for thin fibers was first described by Rayleigh in 1881 [26], and a detailed derivation can be found in [27]. The wavelength-dependent total scattering cross section per unit length of a single isolated fiber of random orientation, with its fiber axis in the y-z-plane and an incident beam perpendicular to the fiber axis and therefore normal to the y-z-plane, is given by

σ_scattering(λ) = (n_1^3 π^3 (πr^2)^2 (m^2 − 1)^2 / λ^3) [1 + 2/(m^2 + 1)^2] (7)

where r is the fiber radius, λ is the wavelength, and m = n_2/n_1 is the ratio between the refractive indices of the fiber material and the surrounding medium. Derived from the dielectric needle approximation, Equation (7) has been used extensively to describe the natural transparency of the mammalian cornea [28][29][30][31][32]. For a porous scaffold, which is the case for electrospun scaffolds, a reduction in light transmission occurs for every interaction with individual fibers. The total light transmission through a nanofibrous scaffold should therefore be describable through the scaffold's thickness, the diameter of the nanofibers, and the refractive indices of the fiber material and the surrounding medium.
Figure 1. Schematic of an incident beam with intensity I_0 in the x direction passing through (a) a homogeneous volume of thickness d with the optical interfaces at the n_2/n_1 and n_1/n_2 transitions and (b) a planar scaffold in the y-z-plane of thickness D, consisting of single nanofibers with fiber diameter d. Propagation of the incident beam in the x direction.
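To illustrate how Equations (6) and (7) combine, the sketch below computes a theoretical transmission curve, assuming independent scattering and converting the per-fiber cross section into a volume scattering coefficient via µ = (f/(πr²))σ, i.e. total fiber length per unit volume times σ; the fiber volume fraction f and the refractive indices (PCL ≈ 1.46, PBS ≈ 1.335) are assumed, literature-style values rather than results from this study.

```python
import numpy as np

def transmission(wavelength, r, D, n_medium, n_fiber, f=0.5):
    """Theoretical transmission of a scaffold of thickness D (SI units)."""
    m = n_fiber / n_medium
    # Eqn. (7): scattering cross section per unit length of a thin fiber;
    # the prefactor uses the refractive index of the surrounding medium,
    # following the standard Rayleigh thin-cylinder result.
    sigma = (n_medium**3 * np.pi**3 * (np.pi * r**2)**2 / wavelength**3
             * (m**2 - 1)**2 * (1 + 2 / (m**2 + 1)**2))
    # Assumed dilute, independent scattering: total fiber length per unit
    # volume is f / (pi r^2) for a fiber volume fraction f.
    mu_scattering = f / (np.pi * r**2) * sigma
    R = ((n_medium - n_fiber) / (n_medium + n_fiber))**2   # Fresnel, Eqn. (5)
    # Eqn. (6) with absorption neglected, as suggested by Park et al. [23]:
    return (1 - R)**2 * np.exp(-mu_scattering * D)

# Example: 100 nm fibers, 10 um scaffold, PCL (~1.46) in PBS (~1.335).
lam = np.linspace(380e-9, 780e-9, 5)
print(transmission(lam, r=50e-9, D=10e-6, n_medium=1.335, n_fiber=1.46))
```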
Polycaprolactone (PCL) nanofiber scaffolds were produced via electrospinning. The method is well described in the literature (e.g., [33]; a theoretical description can be found in [34]). In brief, a polymer melt or polymer solution is extruded through a needle. Th polymer solution is stretched due to the electrical forces in the electric field, which is se between the needle and a grounded collector. By varying the polymer concentration, dif ferent fiber diameters can be fabricated. The spinning solution was prepared from PCL For an application-oriented field of research, such as corneal tissue engineering, a general equation, describing the influencing parameters of nanofibrous scaffold transparency, is substantial. Therefore, in this study, electrospun PCL nanofibrous scaffolds with different fiber diameters were investigated regarding their optical properties. Using UV-vis spectroscopy measurements, light transmission through the scaffolds was analyzed with regard to scaffold thickness, fiber diameter, and surrounding medium. Using statistical modelling, power laws were derived for an appropriate description of the data within the experimental error. Finally, design principles were formulated from the experimental findings to promote further research in the field of corneal tissue engineering.
Polycaprolactone (PCL) nanofiber scaffolds were produced via electrospinning. The method is well described in the literature (e.g., [33]; a theoretical description can be found in [34]). In brief, a polymer melt or polymer solution is extruded through a needle. The polymer solution is stretched by the electrical forces in the electric field, which is set between the needle and a grounded collector. By varying the polymer concentration, different fiber diameters can be fabricated. The spinning solution was prepared from PCL (M_W = 80,000 g mol−1, Sigma Aldrich, Saint Louis, MO, USA) dissolved in a 7:3 mixture of formic acid and acetic acid (both Carl Roth GmbH + Co. KG, Karlsruhe, Germany). Fiber diameter was evaluated using SEM images (CrossBeam, Carl Zeiss Microscopy GmbH, Oberkochen, Germany) and ImageJ software. In preliminary experiments, a working window was identified for each solution, focusing on a homogeneous fiber morphology and sufficient fiber yield. The electrospinning parameters as well as the resulting mean fiber diameters ± standard deviation are given in Table 1.

Table 1. Parameters for the electrospinning of PCL scaffolds from spinning solutions with varying concentrations from 5 g/100 mL to 16 g/100 mL and resulting fiber diameters.

With increasing spinning time, the scaffold thickness could be adjusted. Due to the similar flow rates, increasing spinning concentrations and thus fiber diameters led to a reduced spinning time for the desired scaffold thicknesses. Scaffolds were fabricated with desired thicknesses from 1 µm to 50 µm. Within this range, application-oriented conclusions towards predicting light transmission through the nanofibrous scaffolds could be drawn.
Scaffolds were fixed in tissue carrier rings (9 mm inner diameter, Minucells and Minutissue, Bad Abbach, Germany), and the thickness of each scaffold was measured using a digital contact sensor (GT series, Keyence, Itasca, IL, USA). For this measurement, the scaffolds, fixed in the tissue carrier rings, were sandwiched between a cylindrical base (8 mm in diameter) and a circular glass platelet (4.5 mm in diameter). Subsequently, the net thickness was measured over a scaffold area of approximately 16 mm². For the measurement of light transmission through the scaffolds, a UV-vis spectrometer (Specord 210 plus, Analytik Jena GmbH, Jena, Germany) was used. The scaffolds were placed in a custom-built cuvette (Figure 2), ensuring that they were kept in place perpendicular to the incident, monochromatic beam. The cuvette was filled with either ethanol (EtOH) (Carl Roth GmbH + Co. KG, Karlsruhe, Germany) or phosphate-buffered saline (PBS) (VWR International GmbH, Darmstadt, Germany) to investigate the influence of different surrounding media. Light transmission measurements were conducted from 380 nm to 780 nm with an increment of one nanometer. Prior to every measurement, a calibration scan was performed to normalize the measured intensity to the experimental set-up, so that I0(λ) = 100%. For each scaffold type, at least 50 scaffolds were measured, resulting in over 250,000 individual wavelength-transmission data points.
From the individual wavelength-transmission data, discrete thickness-transmission data for defined wavelengths were plotted, as shown in Figure 3. Starting from 380 nm, with an increment of 10 nm, fit lines were plotted using an exponential decay function

$$T(D) = (100\% - T_{\mathrm{background}})\,e^{-mD} + T_{\mathrm{background}} \tag{8}$$

where T_background accounts for the diffuse light transmission of thick scaffolds (>50 µm), for which the measured light transmission is usually in the range of a few percent, and m is the fitted attenuation coefficient. Utilizing Equation (8), an optimal description of the data in the thickness range of interest was reached. Data fitting was performed as a two-stage process using Origin 2019 (OriginLab Corporation, Northampton, MA, USA). After the first fitting, data points with an individual residuum higher than 1.5 times the externally studentized residuum of the fit function were removed from the dataset, and fitting was then repeated with the processed dataset. Usually, outliers originated from false thickness measurements, due to the thickness measurement in contact mode or to an inhomogeneous thickness distribution of the scaffolds. Finally, fit lines were combined in a contour plot, with the wavelength on the x-axis, the scaffold thickness on the y-axis, and the light transmission as color grading. Between the fit lines, a linear interpolation was presumed. With this approach, errors in the determination of the scaffold thickness or light transmission could be eliminated by averaging a large amount of data. From the contour plot, contour lines of arbitrary thickness can be extracted for an exact comparison of different experimental groups. To give an estimation of the experimental error, contour lines are presented with error bars indicating the 95% confidence interval of the discrete fit lines.

Figure 3. From the individual transmission values, transmission-versus-scaffold-thickness plots were generated for discrete wavelengths. Using the fit lines, contour plots were generated for individual scaffolds and enclosing media. Using the contour lines, a comparison between different scaffolds and environmental parameters at various scaffold thicknesses could be made.
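A minimal sketch of the two-stage fitting procedure described above, assuming the reconstructed form of Equation (8): scipy's curve_fit stands in for Origin, the externally studentized residual criterion is approximated by a simpler residual-spread cutoff, and the data below are synthetic, not measurements from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(D, m, T_bg):
    """Equation (8): exponential decay of transmission with thickness D (um)."""
    return (100.0 - T_bg) * np.exp(-m * D) + T_bg

def two_stage_fit(D, T, cutoff=1.5):
    """Fit, drop points whose residual exceeds `cutoff` times the residual
    spread (a stand-in for the studentized-residual criterion), then refit."""
    p, _ = curve_fit(decay, D, T, p0=(0.05, 2.0))
    resid = T - decay(D, *p)
    keep = np.abs(resid) <= cutoff * np.std(resid, ddof=len(p))
    p, _ = curve_fit(decay, D[keep], T[keep], p0=p)
    return p, keep

# Synthetic demonstration data (not measurements from the study)
rng = np.random.default_rng(0)
D = rng.uniform(1, 50, 200)
T = decay(D, 0.07, 3.0) + rng.normal(0, 2, D.size)
(m, T_bg), keep = two_stage_fit(D, T)
print(f"m = {m:.3f} 1/um, T_background = {T_bg:.1f} %, kept {keep.sum()} points")
```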
Further evaluations of the experimental data were conducted at a wavelength of 589 nm due to the availability of refractive indices, as the D-line of the sodium spectrum is usually used for determining the optical properties of materials. The absorption coefficient of PCL was presumed to be 0.0001 µm−1, and the considered refractive indices of the used materials were 1.36 for ethanol, 1.33 for PBS, and 1.46 for PCL [23,35,36].
For a simplified and easy-to-use formulation of transmission through nanofibrous scaffolds at distinctive wavelengths, a semi-empirical approach using a regression analysis was adopted using Statistica 10 (StatSoft Inc., Tulsa, OK, USA). In total, the modeling of approximately 250,000 individual experimental data points was performed, and a semi-empirical model, depending on the scaffold properties and surrounding medium, was formulated.
Results and Discussion
The individual transmission measurements of scaffolds with arbitrary thickness were the basis of the following results. Evaluating the transmission as a function of scaffold thickness for discrete wavelengths made it possible to analyze light transmission through electrospun scaffolds and to compare devised scaffolds of arbitrary thickness with regard to their transparency. Figure 4 shows the fit functions of all six sample groups for a discrete wavelength of 589 nm. As expected from Equation (3), light transmission decreased exponentially with increasing scaffold thickness. Sufficiently high transmission values were only obtained below 5 µm, whereby scaffolds with thinner fiber diameters showed a higher light transmission in general. As displayed, the parameter m from Equation (8) increased with increasing fiber diameter from 0.054 to 0.089 µm−1, and with it the light attenuation. The highest light transmission could therefore be attributed to scaffolds consisting of fibers with a diameter of 35 nm (Figure 4, top left). The parameters of the scaffolds with fiber diameters of 103 nm and 136 nm diverged slightly from the overall trend. This could be due to insufficient data points in the relevant thickness range, resulting in poor data fitting. Moreover, the broad fiber diameter distribution meant that the median fiber diameters of the samples with fiber diameters from 103 nm to 136 nm were not significantly distinguishable. Nevertheless, it is clear from Figure 4 that with increasing fiber diameter the coefficient m increased, and the transmission of visible light through the scaffolds decreased.
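A back-of-the-envelope check of these fit parameters, ignoring T_background for simplicity:

```python
import numpy as np

# Transmission of a 10-um scaffold from Equation (8), neglecting T_background,
# for the fitted attenuation coefficients reported above (in 1/um).
for m in (0.054, 0.089):
    T = 100.0 * np.exp(-m * 10.0)
    print(f"m = {m:.3f}: T(10 um) ~ {T:.0f} %")
# m = 0.054 -> ~58 %, m = 0.089 -> ~41 %
```

These values are consistent with the 66% and 43% reported below for 10 µm scaffolds once the few percent of diffuse background transmission are added back.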
Individual Transmission Measurements and Resulting Contour Plots
The empirical description of light transmission, as shown in Figure 4, was evaluated for discrete wavelengths from 380 nm to 780 nm with an increment of 10 nm. From the combination of fit lines, contour plots were generated, as displayed in Figure 5. The fit lines became vertical lines in Figure 5, with transmission as color grading from red (0% light transmission) to green (100% light transmission). Light transmission >85% characterized a scaffold transparency comparable to that of the human cornea [9]. Again, it became clear that light transmission values above 85% were only accessible for thin scaffolds. With increasing scaffold thickness, light transmission was reduced to insufficient values for all types of scaffolds. The concept presented in this study, using discrete wavelengths and resulting contour plots, as shown in Figures 4 and 5, may serve as a tool to decide on the maximum scaffold thickness for a desired light transmission or vice versa. This represents a novel approach for the characterization of scaffolds for corneal tissue engineering. Formerly, for a meaningful comparison, scaffolds of similar thickness had to be produced. Now, for the first time, light transmission through nanofibrous scaffolds can be compared not only for existing scaffolds but also for scaffolds of arbitrary thickness. Based on the plots in Figure 5, further evaluations of the influence of fiber diameter and enclosing medium on light transmission through electrospun scaffolds were performed.
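The contour-plot construction described above can be sketched as follows. The wavelength dependence of m used here is only a placeholder dispersion, since the fitted per-wavelength values are not reproduced in the text; only the assembly of fit lines into a wavelength-thickness-transmission map mirrors the procedure.

```python
import numpy as np
import matplotlib.pyplot as plt

# One fitted decay curve per wavelength (380-780 nm in 10-nm steps),
# combined into a contour plot; linear interpolation happens implicitly.
wavelengths = np.arange(380, 790, 10)        # nm
thickness = np.linspace(0.5, 20, 200)        # um
m = 0.07 * (589.0 / wavelengths) ** 1.5      # placeholder dispersion of m
T = (100.0 - 3.0) * np.exp(-m[None, :] * thickness[:, None]) + 3.0

fig, ax = plt.subplots()
cs = ax.contourf(wavelengths, thickness, T,
                 levels=np.linspace(0, 100, 21), cmap="RdYlGn")
ax.contour(wavelengths, thickness, T, levels=[85], colors="k")  # corneal limit
ax.set_xlabel("wavelength (nm)")
ax.set_ylabel("scaffold thickness (um)")
fig.colorbar(cs, label="transmission (%)")
plt.show()
```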
Influence of Fiber Diameter and Surrounding Medium
As shown in the previous section, light transmission depends on the scaffold properties. Besides scaffold thickness, fiber diameter is the structuring element. With decreasing fiber diameter, the structure of the scaffolds changed, as the number of fibers per unit volume increases. In Figure 6a, exemplary 10 µm scaffolds from the contour plots of Figure 5 are displayed. The overall light transmission increased with decreasing fiber diameter. For better clarity, scaffolds with 103 nm and 136 nm fiber diameter were left out because, due to the broad fiber diameter distribution of electrospun nanofibers, the light transmission values of the scaffolds with 103 nm, 113 nm, and 136 nm fibers were not significantly different, as already mentioned. The highest light transmission was observed for scaffolds consisting of fibers with a diameter of 35 nm: transmission values up to 66% (at 589 nm) were measured. With increasing fiber diameter, light transmission decreased to 43% (at 589 nm). For all scaffolds, a wavelength-dependent light transmission was observed. This could stem from the decreasing ratio of fiber diameter to wavelength with increasing wavelength. Similar to pure Rayleigh scattering, where the scattered intensity is proportional to λ−4, or the thin needle approximation as shown in Equation (7), the influence of scattering is reduced for increasing wavelengths [27].
Electrospun scaffolds usually show a whitish appearance. The big difference in refractive indices between air and polymer leads to strong isotropic reflections and scattering of all wavelengths; hence, the scaffolds appear white. With a decreasing difference in refractive index, reflectance as well as scattering could be minimized, and scaffold transparency improved. In Figure 6b, the transmission data for two different scaffold types are shown. Just like in Figure 6a, light transmission data were taken from the contour plots as horizontal lines for a scaffold thickness of 10 µm for scaffolds with 35 nm and 167 nm fiber diameter. Again, light transmission was enhanced with a reduced fiber diameter. Changing the surrounding medium from EtOH to PBS reduced light transmission by 5 to 10 percentage points. The difference in refractive index increased from 0.1 (PCL/EtOH) to 0.13 (PCL/PBS), resulting in an increased light attenuation. In addition to UV-vis measurements, differences in light transmission can be observed with optical imaging. For this, scaffolds with a thickness close to 10 µm were moistened in PBS and placed onto a reference. The resulting images are displayed in Figure 7.
The transparency of the scaffolds, as already indicated in Figure 6, could be classified as insufficient for corneal grafts, though, as shown in Figure 7, the transparency of the scaffold with a mean fiber diameter of 35 nm (B) was closer to that of the reference (A) than the transparency of the scaffold with a fiber diameter of 167 nm (C).
Figure 6. Examples of extracted contour lines from Figure 5. Transmission values were taken for scaffolds with a thickness of 10 µm. Light transmission increases with a decreasing fiber diameter (a) as well as with a decreasing ratio of the refractive indices (b).
Summarizing the above, it can be concluded that reducing the fiber diameter and matching the refractive indices yield improved light transmission through nanofibrous scaffolds.
Semi-Empirical Description of Light Transmission
Following the theoretical considerations in the Materials and Methods section, light transmission through the nanofibrous scaffolds depends on scaffold properties such as fiber diameter and scaffold thickness and on material characteristics such as the refractive index. Thus, a semi-empirical description of the experimental transmission data was derived to describe light transmission through nanofibrous scaffolds using regression analysis. With the formulation of scaling laws, precise predictions of the influence of eligible parameters can be made within the experimentally accessed range. Neglecting wavelength-dependent variances in the refractive indices, and in consistency with the Lambert-Beer law (Equation (3)), the following approach was chosen:

$$\ln\!\left(\frac{-\ln T}{D}\right) = \alpha_0 + \alpha_1 \ln R + \alpha_2 \ln d - \alpha_3 \ln \lambda \tag{9}$$

where R stands for the reflectance from Equation (5) at a wavelength of 589 nm. The resulting α-values were α0 = 1.48, α1 = 0.55, α2 = 0.60, and α3 = 1.68 (d, D, and λ in µm). A further simplification, based on the consideration of physically reasonable dimensions (the attenuation coefficient −ln T/D must carry the dimension of an inverse length, which ties the exponent of λ to that of d), led to an improved model with only three adjustable parameters:

$$\ln\!\left(\frac{-\ln T}{D}\right) = \alpha_0 + \alpha_1 \ln R + \alpha_2 \ln d - (1+\alpha_2) \ln \lambda \tag{10}$$

Now, the resulting α-values were α0 = 1.41, α1 = 0.55, and α2 = 0.57. Taking into consideration the experimental error due to variances in fiber diameter as well as scaffold thickness, α1 and α2 were set as α1,2 = 0.5. Subsequently, the model from Equation (10) could be written as

$$\frac{-\ln T}{D} = \alpha\,\frac{\sqrt{R\,d}}{\lambda^{3/2}} \tag{11}$$

In this semi-empirical model, α is a dimensionless parameter and was set to α = 2.75. Consequently, the formulation presented in Equation (11) could be written as

$$T = \exp\!\left(-\alpha\,D\,\frac{\sqrt{R\,d}}{\lambda^{3/2}}\right) \tag{12}$$

and allowed the prediction of light transmission through nanofibrous scaffolds within typical experimental errors. Accounting for the differences in refractive indices, R was derived from the Fresnel equations for vertical incidence, neglecting multibeam interference [24,25]. The predicted transmission data versus the observed transmission data for all six sample groups, measured in two different media within the range of 380 nm to 780 nm, are shown in Figure 8. The data are described with R² = 0.91, suggesting an acceptable accuracy of the model within the experimental data. An estimation of the experimental error was performed utilizing the relative error

$$T_{\mathrm{relative}} = \frac{\Delta T}{T} \approx -\mu\,\Delta D \tag{13}$$

considering that the dominant experimental uncertainty is attributed to the scaffold thickness D.
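A minimal sketch of Equations (11) and (12) as reconstructed above, with R taken from the Fresnel equation for normal incidence, R = ((n1 − n2)/(n1 + n2))². The function names are ours; as a sanity check, for a 10 µm scaffold of 35 nm PCL fibers in EtOH at 589 nm the sketch returns roughly 67%, close to the measured 66% reported above.

```python
import numpy as np

def fresnel_R(n1, n2):
    """Fresnel reflectance for normal incidence, neglecting multibeam
    interference."""
    return ((n1 - n2) / (n1 + n2)) ** 2

def transmission(D_um, d_um, lam_um, n_fiber, n_medium, alpha=2.75):
    """Semi-empirical model, Equation (12) as reconstructed:
    T = exp(-alpha * D * sqrt(R * d) / lam**1.5), all lengths in um."""
    R = fresnel_R(n_fiber, n_medium)
    return np.exp(-alpha * D_um * np.sqrt(R * d_um) / lam_um ** 1.5)

# 10-um scaffold, 35-nm PCL fibers (n = 1.46) in EtOH (n = 1.36) at 589 nm
print(f"T = {100 * transmission(10.0, 0.035, 0.589, 1.46, 1.36):.0f} %")
```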
Figure 8. Predicted versus observed transmission of all individual data points. Predicted transmission was calculated using Equation (12). For reasons of clarity, only every 50th data point is shown. The red area corresponds to the error range based on Equations (13) and (21). Transparency of a healthy cornea is indicated at T = 85%.
T_relative equals approximately −µ ΔD, giving easy access to the expectable accuracy of the predicted transmission data, as µ is defined as µ(n1, n2, λ, d). In order to estimate the error of the scaffold thickness, the following simple approach was adopted: assuming that a scaffold with total thickness D can be separated into N layers of thickness Di, the total thickness can be written as

$$D = \sum_{i=1}^{N} D_i \tag{14}$$

yielding the error of the total thickness, utilizing error propagation,

$$\Delta D = \sqrt{\sum_{i=1}^{N} (\Delta D_i)^2} \tag{15}$$

On the other hand,

$$D = D_i\,N \tag{16}$$

while all sublayers with thickness Di can be assumed to have the same thickness De,

$$D_i = D_e \tag{17}$$

and therefore the same error

$$\Delta D_i = \Delta D_e \tag{18}$$

From Equation (15), it follows that

$$\Delta D = \sqrt{N}\,\Delta D_e \tag{19}$$

and utilizing Equation (16), the error can now be estimated with

$$\Delta D = \sqrt{\frac{D}{D_e}}\,\Delta D_e = \frac{\Delta D_e}{\sqrt{D_e}}\,\sqrt{D} = k\,\sqrt{D} \tag{20}$$

where k is an adjustable parameter. Considering typical values for the scaffold thickness, k yields values of approximately 1 (for D in µm). Finally, the experimental error in the measurement of the scaffold thickness can be estimated with

$$\Delta D = k\,\sqrt{D} \tag{21}$$

With the semi-empirical formulation of light transmission through nanofibrous scaffolds, a novel concept is presented for the design of nanofibrous scaffolds, focusing on the optical properties.
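The error estimate of Equations (13) and (21) can be combined in a few lines; this is a sketch under the reconstructed model, with k = 1 in µm units as estimated above.

```python
import numpy as np

def mu(d_um, lam_um, n_fiber, n_medium, alpha=2.75):
    """Attenuation coefficient from Equation (11), in 1/um."""
    R = ((n_fiber - n_medium) / (n_fiber + n_medium)) ** 2
    return alpha * np.sqrt(R * d_um) / lam_um ** 1.5

def relative_error(D_um, d_um, lam_um, n_fiber, n_medium, k=1.0):
    """Equations (13) and (21) combined: T_rel ~ -mu * dD with dD = k*sqrt(D)."""
    return -mu(d_um, lam_um, n_fiber, n_medium) * k * np.sqrt(D_um)

# 10-um scaffold, 35-nm PCL fibers in EtOH at 589 nm
err = relative_error(10.0, 0.035, 0.589, 1.46, 1.36)
print(f"relative transmission error ~ {100 * err:.1f} %")
```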
Formulation of the Design Principles
Tissue engineering in the context of ophthalmology mostly deals with the full or lamellar replacement of the cornea. The main part of the cornea, the stroma, consists of highly aligned collagen fibrils [9], which act as scatterers, besides other parts of the stroma like the keratocytes. The collagen fibrils have a diameter around 25 nm and are thus even smaller in diameter than the smallest fibers in this study. Considering them as a blueprint, mimicking the corneal structure would mean the following:
• Reducing the fiber diameter d;
• Reducing the scaffold thickness D;
• Selecting a material with a refractive index similar to that of the human cornea.

The first two points refer to structural properties, while the latter is purely based on the chosen material; however, fiber diameter and matching refractive indices are closely connected, as depicted in Figure 6. Especially with decreasing fiber diameter, light transmission is mainly influenced by the scattering cross section, which again strongly depends on the refractive index of the used material. The ideal material would therefore have a refractive index as close as possible to the refractive index of the human cornea, n_cornea = 1.376 [9].
As most of the commonly used polymers for tissue engineering possess a refractive index of approximately 1.50, it becomes evident that for corneal tissue engineering, the use of pure polymers will result in insufficient light transmission. Therefore, we suggest blending these polymers with foremost hygroscopic polymers such as peptides or polysaccharides, or even using hygroscopic polymers themselves as fibers for the scaffold. The key to an improved light transmission lies in the incorporation of water (n = 1.33) into the polymeric fiber matrix. With a sufficient amount of water uptake, the resulting refractive index of the blend fibers can be approximated using the Gladstone-Dale equation [37], which holds for Δni < 0.2. Thus, n_total can be calculated as

$$n_{\mathrm{total}} = \sum_i n_i v_i \tag{22}$$

where ni represents the refractive indices and vi the volume fractions of the individual components, with ∑vi = 1. With the estimation of a hypothetical refractive index for varying blend compositions, the transmission can be predicted. With this approach, a preselection of suitable polymers and polymer blends can be achieved, and basic design principles can be formulated. In Figure 9, a hypothetical example using this approach is shown. Blending polymer A (nA = 1.45) and polymer B (nB = 1.55) in different ratios requires a defined amount of water uptake to minimize the difference in refractive indices between the ternary blend and the natural cornea. As a visual result, the ternary contour plot of Δn is shown in Figure 9a. From this, using Equation (11), the transmission could be calculated. In Figure 9b, the transmission for a scaffold in the equilibrium swelling state with 10 µm thickness, 100 nm fiber diameter, and an experimental wavelength of 589 nm is shown.
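A minimal sketch of the blend preselection via Equation (22); the volume fractions below are one hypothetical composition within the ranges discussed for Figure 9, not an experimental formulation.

```python
import numpy as np

N_CORNEA = 1.376  # refractive index of the human cornea [9]

def n_blend(fractions, indices):
    """Gladstone-Dale mixing rule (Equation (22)): n_total = sum(n_i * v_i),
    valid for index differences below ~0.2; fractions must sum to 1."""
    fractions, indices = np.asarray(fractions), np.asarray(indices)
    assert np.isclose(fractions.sum(), 1.0)
    return float(fractions @ indices)

# Hypothetical ternary blend from Figure 9: polymer A (n = 1.45),
# polymer B (n = 1.55), and water (n = 1.33) after equilibrium swelling
v = (0.25, 0.05, 0.70)
n = n_blend(v, (1.45, 1.55, 1.33))
print(f"n_blend = {n:.3f}, delta vs cornea = {n - N_CORNEA:+.3f}")
```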
Figure 9. Example for a mixture of two polymers and different water uptake after swelling. From the refractive indices of the single materials, the overall refractive index as well as the difference with respect to the refractive index of the cornea can be calculated (a). Using Equation (11), the resulting light transmission through such hypothetical scaffolds can be estimated (b). The scaffold was defined to be 10 µm thick, consisting of fibers with a fiber diameter, after swelling, of 100 nm. Transmission is shown at 589 nm. Asterisk (*) indicates corneal transparency corresponding to T_cornea > 85%.
The resulting proportions of polymer A, polymer B, and water refer to the steady state, where the swelling has reached its equilibrium value. While the ratio of polymer A to B can be adjusted as desired, water uptake is mainly dependent on the hygroscopic behavior of the blend. In the case of Figure 9, rather low volume fractions of components A and B, in the ranges of 0.2 to 0.5 and 0 to 0.3, respectively, should be used, while a high water uptake is required, leading to a final water content of 0.7 to 0.8 (70-80%). Such blends will show light transmission values over 85%, qualifying them for corneal grafts. Table 2 provides a brief overview of various eligible blend polymers.¹ For most polymers, swelling, and thus water uptake, is highly dependent on the degree of crosslinking, the crosslinking agent, and the molar mass. It must be pointed out that hygroscopic polymers require chemical or physical crosslinking; otherwise the fibers would lose their mechanical strength due to the water uptake or, in the worst case, be dissolved. In the case of polymer blends, water uptake is related to the blend polymer and the relative amounts of the matrix and blend polymer. The approach presented in this study can be used in all areas of biomaterials, like bioprinting or tissue engineering, where transparency of the graft is of interest.

¹ Depending on crosslinking, crosslinking agent, and/or blend polymer and content.
Conclusions
The transmission of light in the visible spectrum from 380 nm to 780 nm is an important characteristic of future transplants in corneal tissue engineering. If the patient is to experience a direct improvement after surgery, transparent grafts must be produced. With the emerging interest in electrospun scaffolds for corneal tissue engineering, transparency has to be considered as important as biocompatibility and mechanical strength. In the literature, graft transparency is only examined as a side aspect of graft evaluation, and in most publications only exemplary grafts are shown. In this study, a detailed analysis of light transmission through nanofibrous PCL scaffolds was performed. By varying fiber diameter and surrounding medium, material and structural properties could be separated. For enhanced transparency of nanofibrous scaffolds, thin fibers and matching refractive indices should be used. Moreover, a novel, simple model to describe the light transmission of nanofibrous scaffolds is provided, together with its experimental validation by a large amount of data. Finally, from the general conclusions, design principles were formulated to promote further research in the field of corneal tissue engineering.
"Materials Science",
"Medicine"
] |
A Virtue Reliabilist Error-Theory of Defeat
Knowledge defeat occurs when a subject knows that p, gains a defeater for her belief, and thereby loses her knowledge without necessarily losing her belief. It’s far from obvious that externalists can accommodate putative cases of knowledge defeat since a belief that satisfies the externalist conditions for knowledge can satisfy those conditions even if the subject later gains a defeater for her belief. I’ll argue that virtue reliabilists can accommodate defeat intuitions via a new kind of error theory. I argue that in cases where the subject holds dogmatically onto her belief in the face of an apparent defeater, her belief never qualified as knowledge, since the belief was not gained via an exercise of her epistemic virtues. In cases where the subject suspends her judgment upon receiving the putative defeater her original belief might have qualified as knowledge, but crucially, in such cases knowledge is lost due to loss of belief, rather than due to the epistemic force of the defeater. Therefore, knowledge defeat isn’t a genuine phenomenon even though there are no cases where a subject knows what she originally believed after receiving the putative defeater.
Introduction
Knowledge defeat is said to occur when a subject knows that p, then gains a putative defeater for her belief, and thereby loses her knowledge that p without necessarily losing her belief that p or any relevant evidence. Accommodating the phenomenon of knowledge defeat isn't easy for externalist theories of knowledge.1 Indeed, if subjects such as Yen and Ciri were to rebase their beliefs on the misleading evidence, then their beliefs might become unsafe, since they might have formed a different belief, which would have been false. But whether Yen and Ciri rebase their beliefs is a contingent matter, and therefore their knowledge need not be defeated by the putative defeater, contra the defeatist intuition (Lasonen-Aarnio, 2010).7 Therefore, it isn't easy to see how externalist theories of knowledge could accommodate putative cases of knowledge defeat. An option that used to be popular was to add a no-defeaters clause to the externalist theory of knowledge.8 Those who have been unwilling to add a seemingly ad hoc no-defeaters clause to their accounts of justification or knowledge have aimed to accommodate intuitions of knowledge defeat via error-theories. Lasonen-Aarnio (2010) has argued that knowledge can sometimes be retained in putative cases of knowledge defeat. Our negative assessment of subjects who retain their knowledge in such cases is explained by the fact that they are manifesting bad dispositions, dispositions that would in general be manifested in cases of ignorance, rather than in cases of knowledge. Baker-Hytch and Benton (2015, p. 57) have argued that if knowledge is the norm of belief, then the apparent irrationality of subjects who retain their beliefs in the face of misleading evidence can be explained by the fact that such subjects violate a guidance norm that is generated by the knowledge norm of belief.9 While I am very sympathetic to both Lasonen-Aarnio's proposal and to Baker-Hytch and Benton's account, I wish to sketch a new kind of error-theory that falls directly out of virtue reliabilism. The error-theory I propose differs significantly from the earlier ones. According to it, in cases where the subject dogmatically clings onto her belief she never knew to begin with, or did not acquire a putative defeater in the first place, while in cases where the subject suspends judgment she might have known, but doesn't any more since she lacks the relevant belief. It's an error theory for two reasons. Firstly, in some cases of putative knowledge defeat we mistakenly think that the subject had knowledge to begin with. Secondly, according to the view, putative defeaters cannot on their own defeat knowledge. The defeatist intuition is explained by the fact that there are no cases where a subject knows that p at t1 and retains her knowledge of p after having received a putative defeater for p at t2.
Some readers might think that the error-theory provided is too radical. These readers are invited to see this paper as offering an argument against virtue reliabilism, since the error-theory I present falls directly out of the main tenets of virtue reliabilism. My sole aim here is to examine what consequences virtue reliabilism has for defeat.

7 Note also that if we were to think that what one knows is always part of one's evidence, then if Yen and Ciri don't lose any evidence upon receiving the misleading defeater, their evidence at t2 will still conclusively support their original beliefs because knowledge is factive.
8 Goldman has aimed to deal with putative defeat cases by adding a no-defeaters clause to his theory of justification, and hence to his theory of knowledge. For different ways in which a no-defeaters clause can be added to process reliabilist theories see Goldman (1979; 1986, pp. 111-112) and Lyons (2009, 2016). For critique of these proposals, see Beddor (2015).
9 See Brown (2018, ch. 5) for discussion of these strategies to explain away knowledge defeat.
In what follows, I'll focus on cases like Red light and Feint that involve so-called doxastic or mental-state defeaters. All doxastic defeaters are beliefs. For instance, in 'Red light' the defeater that Yen has is her belief that [the wall is illuminated by red light that would have made the wall look red no matter its actual colour]. I'll set aside cases that feature propositional or normative defeaters (Lackey, 1999). A propositional defeater for the belief that p is a true proposition such that if S were to believe it, then S wouldn't know that p. A normative defeater is a propositional defeater that the subject should have believed. There are two reasons why I limit the scope of inquiry to the potential epistemic force of doxastic defeaters. Firstly, cases featuring doxastic defeaters are the most plausible cases of knowledge defeat. If it turned out that doxastic defeaters are void of epistemic power, as I hope to show, then there is reason to think that propositional and normative defeaters are void of epistemic power too. Secondly, I think that cases of propositional and normative defeat are highly contentious. In my mind it's better not to use such cases when evaluating a theory. Henceforth all talk of defeaters refers to doxastic defeaters.
This essay is structured as follows. In the next section I lay out some key ideas of virtue reliabilism. In the third section I spell out under what conditions a belief can function as a defeater. In the fourth section I examine whether knowledge defeat is a genuine phenomenon, under the assumption that knowledge is always the product of one's cognitive abilities. In the fifth section I briefly compare my account to other virtue theoretic solutions.
Virtue and Coherence of Character
The central thesis of virtue reliabilism is that knowledge requires that one's cognitive success must be attributable to one's cognitive character. Some virtue epistemologists see this central thesis as giving both necessary and sufficient conditions for knowledge.10 Others think that it provides only a necessary condition for knowledge.11 The central thesis can be interpreted in various ways. A cognitive success can be understood either as the acquisition or maintaining of a true belief, or as the acquisition or maintaining of knowledge. The former views belong to the classical tradition of analyzing knowledge in terms of true belief plus some other conditions.12 The latter views belong to the knowledge first movement, championed by Williamson (2000).13 Another aspect in which the central thesis is ambiguous is the question of when a cognitive success is attributable to the subject's cognitive abilities. According to Greco (2010), the truth of a subject's belief is attributable to her cognitive character just in case the fact that she believes out of cognitive character is part of the most salient causal explanation of why she acquired a true, rather than a false, belief. In other words, one's cognitive character has to be an important part of the best causal explanation for one's cognitive success in order for one to know. According to Sosa (2007, 2009), one's cognitive success is attributable to one's cognitive character just in case one's cognitive success is a manifestation of the cognitive abilities that make up one's cognitive character. Thus Sosa seeks to understand the attribution relation in terms of a more general metaphysical relation, namely, as the manifestation of a disposition. Many have preferred Sosa's account to Greco's, probably because Sosa is able to side-step some counterexamples that Greco's account seems susceptible to.14 Here we need not be concerned with these issues. The argument I offer doesn't depend on how one understands cognitive success, nor on how we flesh out the attribution clause. In fact, virtue-theoretic views that don't invoke the attribution relation, but merely require that one's belief be the product of one's cognitive abilities, also fall under the scope of the views I wish to discuss.15 What all of these views share is the idea that knowledge requires the use of cognitive abilities. In order to make use of a cognitive ability one must possess that ability. But under what kind of conditions does one possess a certain ability?
In a broadly Aristotelian spirit, virtue epistemologists think that a reliable doxastic disposition can count as a cognitive virtue only if it's a proper part of one's virtuous epistemic character (Greco, 1999, p. 287; 2010, p. 150; Palermos, 2014, p. 1940; Pritchard, 2012, p. 262).16,17 This is what differentiates virtue reliabilism from process reliabilism. Virtue reliabilists require that the reliable processes be properly grounded in the subject in order to be knowledge-conducive. According to them, not all reliable doxastic dispositions count as cognitive abilities. If Alvin has a brain lesion that causes him to believe that he has a brain lesion, his belief is the product of an extremely reliable doxastic disposition, but it's not a product of his cognitive abilities, because the brain lesion isn't a part of Alvin's cognitive character (Breyer & Greco, 2008, p. 174; Greco, 2010, p. 151; Palermos, 2014, p. 1938).18

14 For instance, Turri (2011), Littlejohn (2014), and Kelp (2017) invoke the notion of manifestation of a disposition in their understanding of the attribution clause. Lackey (2007, 2009) argues on the basis of testimonial cases of knowledge that knowing doesn't require that one's cognitive success be attributable to one's cognitive character. It would seem that Sosa (2007, pp. 95-96; 2011, p. 87) has the means to deal with Lackey's objections, but it's not clear whether Greco does. In fact Greco (2012) has changed his view in light of Lackey's apt criticism. I think that Greco's new proposal is better suited to deal with Lackey's objections. For discussion of different ways to understand credit, see Hirvelä and Lasonen-Aarnio (forthcoming).
15 See Hirvelä (2018, 2019a) and Beddor and Pavese (2020) for a virtue-theoretic view that doesn't invoke the attribution relation. The virtue-theoretic condition that Pritchard (2012) endorses does demand that the agent's cognitive success be of credit to her, and hence is logically stronger.
16 Knowledge first virtue reliabilists think that the relevant cognitive abilities are abilities to know, whereas those virtue reliabilists who have reductive ambitions understand such abilities as abilities to gain or maintain true beliefs. In what follows we can remain neutral on this score.
17 Is the notion of character essential in virtue epistemology? Perhaps not. Sylvan (2017), drawing on the work of Thomson (1997) and Hurka (2006), develops an intriguing virtue responsibilist view that takes act-attaching virtue properties to be fundamental, rather than character-attaching virtue properties. This kind of virtue theory is outside the scope of my argument.

But under what conditions is a reliable doxastic disposition a proper part of one's cognitive character? At least three conditions have been proposed by virtue reliabilists: (1) that the disposition is stable, (2) that it's not strange, and (3) that it's integrated into the subject's cognitive character (Greco, 2010, p. 150). The central idea behind these conditions is that for a doxastic disposition to be a part of one's cognitive character it has to be the agent's disposition. Beliefs that are products of such cognitive abilities are in a sense owned by the subject, in that she is responsible for those beliefs and can be properly blamed or credited for having them. I'll focus on condition (3), since it seems to be the most central one, and is more widely endorsed than conditions (1) and (2).19 What suffices for cognitive integration varies from case to case. In some extreme cases, like in the brain lesion case, reflective endorsement of the truth-conduciveness of the disposition might be required (Pritchard, 2010). If, for instance, Alvin went to see a doctor who told him that he suffers from an extremely rare brain lesion that causes one to believe that one suffers from a brain lesion, the doxastic disposition generated by the brain lesion could become a part of Alvin's cognitive character. But this kind of reflective endorsement is almost never required in more mundane cases. Doxastic dispositions that are innate, or otherwise naturally developed, are integrated into our cognitive system via subconscious mechanisms in virtue of constantly confirming each other's outputs. Consider for example the following description of Edgar's afternoon: Edgar sees a beautiful pint of ale and can smell the overwhelming aroma of the hops. He can feel the cold glass in his hand and, sipping the beer, finds delightful notes of pine, citrus and tropical fruits. Pricking his ears he can even hear the dense head slowly dissolving, and thinks: "I'm drinking ale today". All of these experiences confirm to Edgar that there's a pint of ale on the table.
In Edgar's case all of his sensory modalities partake in confirming a single proposition. Of course this isn't always the case. Many sensible qualities can be sensed directly only via some particular sense modality. No one can hear the redness of the wall; its redness can only be seen. However, many of our experiences are multi-modal in that multiple sense modalities are responsible for our phenomenological state. And it's not just the case that our sense modalities confirm the outputs of each other. Rather, in many cases our sense modalities affect the outputs and operation of our other sense modalities.20 A minimal externalist condition for cognitive integration is that the cognitive abilities act in concert with each other. Greco (2010, p. 152) writes that "cognitive integration is a function of cooperation and interaction, or cooperative interaction, with other aspects of the cognitive system." Palermos (2014, pp. 1941-1942) holds that "the only necessary and sufficient condition for a process to count as knowledge-conducive is that it cooperatively interacts with the rest of the agent's cognitive character. [The] process of cognitive integration gives rise to a coherentist effect both on the level of processes (how the beliefs are generated) and on the level of content (how the beliefs themselves combine)." Pritchard (2010, pp. 147-148) holds that a doxastic disposition D is integrated into the subject's cognitive character only if beliefs gained via D have cohered with the beliefs formed via the subject's other cognitive abilities, and that if they had not, then the subject would have responded accordingly. Sosa also argues that knowledge never arises purely from one faculty, but from the interplay of cognitive faculties.21 He writes:

Note that no human blessed with reason has merely animal knowledge of the sort attainable by beasts. For even when perceptual belief derives as directly as it ever does from sensory stimuli, it is still relevant that one has not perceived the signs of contrary testimony. A reason-endowed being automatically monitors his background information and his sensory input for contrary evidence and automatically opts for the most coherent hypothesis even when he responds most directly to sensory stimuli.
[…] The beliefs of a rational animal hence would seem never to issue from unaided introspection, memory, or perception. For reason is always at least a silent partner on the watch for other relevant data, a silent partner whose very silence is a contributing cause of the belief outcome. (Sosa, 1991, p. 240)

This kind of minimal integration doesn't require any perspective on the truth-conduciveness of the dispositions. The only thing that is required is that the doxastic dispositions that make up one's virtuous cognitive character are not acting in conflict with each other. Hence we are able to lay down the following condition for minimal cognitive integration:

INTEGRATION: Subject S's doxastic disposition D is integrated with her cognitive character only if D would act in concert with the set of doxastic dispositions D* that together with D make up S's cognitive character, if D were triggered while both D and D* are in appropriate conditions.

Given INTEGRATION, a reliable doxastic disposition can qualify as a cognitive ability just in case it acts, or would act, in concert with one's cognitive character, while in appropriate conditions.22 According to virtue reliabilism only beliefs gained via cognitive abilities can have positive epistemic statuses like justification or knowledge. Knowledge and justification require a kind of coherence of one's cognitive faculties. For a subject to be eligible for such normative statuses she must keep her cognitive home in order.23

21 Thanks to Kurt Sylvan for pointing me towards relevant passages in Sosa's work.
22 Note that it is not enough that it would be merely probable that the disposition acts in concert with one's cognitive character. Virtue reliabilists have at least two reasons why they should not opt for a weaker reading of 'would' in INTEGRATION. First, INTEGRATION demands that the relevant dispositions act in concert with each other when triggered while in appropriate conditions. Many virtue reliabilists understand appropriate conditions in terms of normal conditions, or conditions that are otherwise suitable for the exercise of the ability in question (Beddor & Pavese, 2020; Greco, 2010; Sosa, 2010). Therefore, INTEGRATION is already effectively weakened in that it requires only that the dispositions would normally act in concert with each other. Second, if a doxastic disposition D could be a part of S's cognitive character even though it would be merely probable that it acts in concert with S's cognitive character while in appropriate conditions, then the performances that D would issue which were in tension with S's cognitive character would be attributable to S, since they would be manifestations of her cognitive abilities. But within the literature on attributability, even outside virtue epistemology, it is commonplace to think that an act is attributable to an agent "just in case it expresses the agent's deep self" (Shoemaker, 2015, p. 59). But how could a performance that is out of character express, or reveal, the agent's cognitive character? I contend that it could not. I would like to thank an anonymous reviewer at Erkenntnis for raising this issue.

One might object that INTEGRATION is too strong. Even though I know by testimony that the Müller-Lyer lines are equally long, I still see them as being of different lengths. When in the grip of the Müller-Lyer illusion my eyesight doesn't seem to act in concert with the other doxastic dispositions that make up my cognitive character. But here it's important to note that I don't form the belief that the lines are of different lengths on the basis of my perceptual experience when I know that they are of the same length.
The fact that I don't form the belief is evidence that my eyesight is acting in concert with my cognitive character, since the knowledge that I've gained through my other cognitive faculties prevents me from forming a belief that corresponds to the experiential-state generated by my eyesight.
It would be good if we could say more about what it takes for two doxastic dispositions to act in concert with each other. Sadly virtue reliabilists have been largely silent on this issue. What seems clear, however, is that two doxastic dispositions can act in concert with each other just in case the dispositions are appropriately sensitive to each other's outputs. It's clear that in cases where the dispositions generate beliefs that are logically inconsistent, the dispositions are not sensitive to each other's outputs. But while logical inconsistency of the outputs suffices to show that the doxastic dispositions are not properly integrated with the subject's cognitive character, it cannot be a necessary condition. If the doxastic disposition D generates in me the belief that p and another doxastic disposition D* generates in me the belief [I don't know that p] then D and D* are acting in tension with each other, even though p and [I don't know that p] are not logically inconsistent.
A tempting way to explain the tension between D and D* is to appeal to the fact that the outputs that they generated cannot amount to knowledge simultaneously. We could then claim that two doxastic dispositions are acting in concert with each other only if it's possible that the outputs amount to knowledge on the condition that both outputs are true. This constraint on cognitive integration is supported by the idea that knowledge is the norm of belief.24 The purpose of our cognitive abilities is to provide a unified picture of the world that amounts to knowledge. If our doxastic dispositions are acting against each other in such a way that achieving this aim is impossible, then at least some of those doxastic dispositions are not integrated with our cognitive character.

23 I argue elsewhere (2020) that if knowledge requires employing cognitive abilities that are integrated into our cognitive character, then modal conditions for knowledge which are relativized to such abilities are not hostage to the possible truth of the extended mind thesis.
24 The knowledge norm of belief has been endorsed by Williamson (2000) and Sosa (2011) among many others.
However, by saying that two doxastic dispositions can act in concert with each other just in case the outputs they yield could have amounted to knowledge simultaneously threatens to make virtue reliabilism a circular theory of knowledge. While this would probably suit knowledge first virtue reliabilists like Kelp and Miracchi, it's doubtful whether those who aim to provide a reductive virtue-theoretic analysis of knowledge should understand cognitive integration in this way.
But here it's important to note that we need not commit ourselves to the idea that cognitive integration should ultimately be understood in terms of knowledge. Rather, we can only note that when it's in principle impossible that the two outputs could have amounted to knowledge if they were true, then the doxastic dispositions that produced the outputs are not acting in concert. True, we will use our pre-theoretic understanding of knowledge when determining whether a doxastic disposition is integrated to the subject's cognitive character, as does Williamson (2000) when he uses our pre-theoretic understanding of knowledge to determine whether a belief is safe. But this need not make virtue reliabilism a circular theory of knowledge. Virtue reliabilists are still free to unpack the notion of cognitive integration without appealing to knowledge. All we require here is that the way in which virtue reliabilists end up unpacking cognitive integration entails that two doxastic dispositions that are acting in a 'knowledge-inconsistent way' are not acting in concert with each other.
Finally, it's worth keeping in mind that virtue reliabilists relativize cognitive abilities to normal or appropriate conditions and environments (Greco, 2010; Sosa, 2010). This means that cognitive abilities can be lost when moving to environments that are not suitable for the use of those abilities. The fact that one's doxastic dispositions don't act in concert in some such conditions and environments doesn't mean that those doxastic dispositions wouldn't qualify as cognitive virtues in more suitable environments and conditions, where the doxastic dispositions in question are in the market for being cognitive abilities. This helps to alleviate the pressure to think that INTEGRATION is too strong a condition.
In the next section we examine under what kind of conditions a belief can serve as a defeater.
Defeat and Justification
I'll assume that only those beliefs that have a positive epistemic status can serve as defeaters. I think that this positive epistemic status is justification. Irrational and unjustified beliefs cannot serve to defeat knowledge or justified beliefs. I take this to be the mainstream position among epistemologists,25 but it'll be useful to go through the rationale for this position, since it plays a pivotal role in the next section.
Often some of our beliefs confer justification on our other beliefs. The fact that I know that the drink is laced with hemlock justifies me in believing that the drink is poisonous. In this case my knowledge entails the truth of the latter belief. But if I believed out of sheer paranoia that the drink is laced with hemlock, I wouldn't be justified in believing that the drink is poisonous. While the contents of my beliefs in the above cases stand in exactly the same logical relations, my belief that the drink is poisonous isn't justified in the latter case, since there is no justification to be transmitted from my belief that the drink is laced with hemlock. Similarly, if I were to believe out of wishful thinking that England is going to lose the game, I wouldn't thereby be justified in believing that Italy is going to win the game. If justified beliefs could be built on paranoia and wishful thinking, living a good epistemic life would be all too easy.
Given that irrational and unjustified beliefs cannot confer positive epistemic statuses on our other beliefs, it would be prima facie bizarre if they could render our justified beliefs unjustified. How could they have only this kind of negative epistemic import? Moreover, if irrational and unjustified beliefs can defeat justified beliefs, then they can also serve to restore the justificatory status of beliefs (Casullo, 2018). 26 This is because a putative defeater d can be defeated by yet another putative defeater d', rendering the original belief justified once again (Pollock, 1987). One shouldn't be able to restore the justificatory status of a defeated belief by irrationally believing that the putative defeater doesn't defeat one's original belief. Otherwise irrational and unjustified beliefs could confer justification on our beliefs. Therefore, only justified beliefs can serve as defeaters.
Virtue reliabilists think that a subject S's belief is justified if, and only if, it's an exercise of S's cognitive abilities (Greco, 2002, p. 311; Kelp, 2017, p. 238; Miracchi, 2015, p. 48; Sosa, 1991, p. 189). Given that beliefs need to be justified in order to serve as defeaters, a defeater-belief must be a product of one's cognitive abilities.
A Virtue Reliabilist Error-Theory of Defeat
Defeat of the Virtues?
So far I've shown that virtue reliabilists are committed to the idea that knowledge arises from exercises of cognitive abilities, and that a doxastic disposition can qualify as a cognitive ability only if it's suitably integrated with the cognitive character of the subject. I've also explained that in order for a putative defeater to have potential normative import, the defeater must enjoy a positive epistemic standing. I assume that the defeater has to be justified in order to have potential normative import. On virtue reliabilism, justified beliefs are exercises of cognitive abilities. Therefore, the defeater belief has to be an exercise of a cognitive ability in order to have potential normative import. Given this, what must virtue reliabilists say about the phenomenon of knowledge defeat?
Consider a paradigmatic case of knowledge defeat like Red light: At t1 Yen comes to know that the wall in front of her is red via perception in optimal conditions. At t2 Yen's trusted friend Triss tells her that the wall is illuminated by red light that would have made the wall look red even if it had been of some other colour.
In order for Red light to be a potential case of knowledge defeat, it must be the case that Yen's belief that the wall is red is a product of her cognitive abilities. Otherwise her belief could not have qualified as knowledge at t1. It must also be the case that her belief that [the wall is illuminated by red light that would have made the wall look red whatever its actual colour is] is a product of her cognitive abilities, since otherwise the defeater belief wouldn't be justified, and hence wouldn't have any defeating force. Now suppose that Yen dogmatically clings to her belief that the wall is red after forming a justified belief in the defeater. Given that the defeater suggests that her original belief doesn't qualify as knowledge, what should virtue reliabilists say about this case? I think that virtue reliabilists are committed to claiming that the case, when described in this way, is metaphysically impossible. It cannot be the case that both Yen's original belief and her defeater belief are products of her cognitive abilities. Why? Because the doxastic dispositions that generate these beliefs are clearly acting in tension, rather than in concert, with each other. After all, the way in which Yen believes that the wall is red can only constitute knowledge if her defeater belief isn't knowledge, and vice versa. This is because if Yen knows by visual perception alone that the wall is red at t1, it cannot be the case that the wall is bathed in red light at t1, because then the colour that Yen would have seen would be the one the red light cast on the wall, and not the redness of the wall itself. The truth of Yen's perceptual belief would lack an appropriate causal connection to what makes it true. Similarly, Yen cannot know via testimony that the wall was bathed in red light at t1 if she knew by visual perception alone that the wall is red at t1. After all, if the wall was bathed in red light at t1, the truth of Yen's perceptual belief would have lacked an appropriate causal connection to what makes it true, and hence she couldn't have known by visual perception that the wall is red. So while the contents of the doxastic outputs are not logically inconsistent, the ways in which the beliefs are formed are epistemically inconsistent, in that both beliefs could not have constituted knowledge simultaneously.
Recall that INTEGRATION requires that a subject's doxastic disposition would act in concert with the other doxastic dispositions that make up the subject's cognitive character if it were triggered. In cases where the subject dogmatically clings to her belief after forming a justified belief in the putative defeater, this counterfactual is false. Importantly, the counterfactual was already false at the moment when the subject formed her original belief, and hence the subject's original belief was not a product of her cognitive abilities and cannot qualify as knowledge. Therefore, if the defeater belief is justified, and the subject holds onto her original belief after receiving the putative defeater, her original belief never amounted to knowledge to begin with. Since the subject never acquires knowledge in this first variant of the case, there is no knowledge defeat.
Alternatively, it could be the case that Yen's defeater belief isn't a product of her cognitive abilities, in which case the belief would be unjustified. But if it's true that unjustified beliefs cannot serve as defeaters, Yen doesn't have a defeater for her belief that the wall is red. Moreover, since Yen's defeater belief isn't a product of her cognitive abilities, the doxastic dispositions that help to constitute her cognitive character are not acting in tension with each other if she holds onto her original belief. Therefore, Yen can know that the wall is red. And since Yen can continue to know that the wall is red after t2 in this second variant of the case, there is no knowledge defeat in this variant either.
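Before turning to the remaining variants, the logical skeleton of this dilemma can be made explicit. The following is a minimal formal sketch of my own, not part of the original argument; the proposition names are illustrative placeholders standing in for the virtue-reliabilist commitments just rehearsed.

```lean
section ErrorTheory

-- All names are illustrative placeholders, not the author's formalism.
variable (OrigFromAbility DefFromAbility : Prop)
variable (OrigKnowledge DefJustified Defeat : Prop)

-- h1: knowledge requires an exercise of an integrated cognitive ability.
-- h2: a justified defeater belief likewise requires such an exercise.
-- h3: in the dogmatic case the two dispositions act in tension, so they
--     cannot both count as exercises of integrated cognitive abilities.
-- h4: genuine defeat requires prior knowledge and a justified defeater.
theorem no_knowledge_defeat
    (h1 : OrigKnowledge → OrigFromAbility)
    (h2 : DefJustified → DefFromAbility)
    (h3 : ¬ (OrigFromAbility ∧ DefFromAbility))
    (h4 : Defeat → OrigKnowledge ∧ DefJustified) :
    ¬ Defeat :=
  fun hd => h3 ⟨h1 (h4 hd).1, h2 (h4 hd).2⟩

end ErrorTheory
```

The point of the sketch is simply that, granted the three commitments, "defeat" cannot be instantiated: one of its two preconditions must fail, which is exactly the dilemma pressed in the two variants above.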
But suppose that instead of dogmatically holding onto her belief, Yen suspends judgment after having formed a justified belief in the defeater. In this third variant Yen's original belief might have amounted to knowledge, since the cognitive abilities that are responsible for her perceptual belief are acting in concert with her cognitive character. In this version Yen is acting in the same way as the subject who cannot fail to see the Müller-Lyer lines as being of different lengths but nevertheless doesn't believe that they are of different lengths after having learned, perhaps by testimony, that the Müller-Lyer lines constitute a known illusion. But while Yen's original belief and her defeater belief might be products of her cognitive abilities in this variant of the case, there is no knowledge defeat in this case either. It's true that she doesn't know that the wall is red after having received the putative defeater, but this is because she doesn't believe that the wall is red after having received it. It's not the defeater that robs her of knowledge; it's her lack of belief. Below is a table that summarizes these different variants.

Variant | Yen's reaction at t2 | Defeater justified? | Knew at t1? | Knowledge defeat?
1 | Dogmatically retains belief | Yes | No: her belief never amounted to knowledge | No
2 | Dogmatically retains belief | No: the defeater belief isn't a product of her abilities | Yes, and she continues to know | No
3 | Suspends judgment | Yes | Possibly | No: knowledge is lost only because belief is lost

But while knowledge defeat doesn't occur in any of the three variants, it's not impossible for Yen to lose her knowledge while holding onto her original belief. If Yen were to rebase her belief that the wall is red at t2, and the doxastic disposition responsible for the rebasing was not a cognitive ability, she would fail to know that the wall is red at t2, even though she would still believe that the wall is red. But here it's important to recall that whether Yen rebases her belief at t2 is a contingent matter. And since it's a contingent matter, Yen doesn't necessarily lose her knowledge after having acquired the putative defeater. It could also be the case that Yen's cognitive character changes between t1 and t2 in such a way that the doxastic disposition that generated the belief that the wall is red no longer counts as a cognitive ability at t2. 27 In this variant of the case Yen could have known at t1 that the wall is red, but doesn't know it at t2, since the doxastic disposition in charge of retaining the belief doesn't qualify as a cognitive ability at t2. But again, it's a contingent matter whether Yen's cognitive character changes between t1 and t2, and hence the fact that knowledge is lost in this variant of the case doesn't suffice to show that Yen's knowledge is defeated.
Recall that knowledge defeat occurs just in case a subject knows that p, gains a putative defeater for her belief that p, and thereby loses her knowledge that p without necessarily losing her belief that p. The defeatists claim that acquiring the putative defeater suffices on its own to defeat one's knowledge. But knowledge defeat doesn't occur in any of the five variants of Red light that we just considered. In the first version Yen never knew, in the second she never gained a defeater, and in the third she lost knowledge only because she lost the relevant belief. In the last two variants Yen does lose her knowledge without losing the corresponding belief. But this is only because either (1) she starts believing that the wall is red via a method of belief-formation that isn't a cognitive ability, or (2) her cognitive character changes in such a way that the way in which she originally formed her belief no longer counts as a cognitive ability. Yen doesn't lose her knowledge in any of these variants solely in virtue of having acquired a putative defeater for her belief.
So knowledge defeat, strictly speaking, is an illusory phenomenon. There are no cases where acquiring a putative defeater for a belief that qualifies as knowledge suffices on its own to defeat the belief's epistemic standing. But while knowledge defeat turns out to be an illusory phenomenon on the sketched account, it's nevertheless true that there are no cases where a subject knows that p after having acquired a putative defeater for her belief that p. Thus virtue reliabilists are able to explain intuitions of knowledge defeat without granting that knowledge defeat is a genuine phenomenon. Virtue reliabilism provides an error-theory of our defeat intuitions. It's an error-theory in two senses. First, it claims that in some putative cases of knowledge defeat, knowledge was never had to begin with. Second, the potential loss of knowledge isn't explained in terms of the putative defeater's normative force, but rather via the way in which the subject reacted to her epistemic situation. 28

Finally, it's worth noting that this error-theory can explain why suspending judgment is nevertheless epistemically good, even though putative defeaters lack normative force. Suspending judgment is epistemically optimal because only in those cases where the subject suspends her judgment is it possible that both her original belief and her defeater belief were justified (variant 3 above). 29

Here's an objection I've heard against the proposed theory (voiced by Maria Lasonen-Aarnio, among others). Intuitively, Yen knows in Red light that the wall is red at t1 even if she would dogmatically cling to her belief if she later gained a justified belief that is a putative defeater for her original belief (variant 1 above). Yen's dogmatism is a vice of her epistemic character that doesn't stain her belief, the objection goes. I grant the objector that intuitively Yen knows at t1 that the wall is red. While it might be unintuitive that Yen's dogmatism would preclude her from knowing that the wall is red, virtue reliabilists are committed to this claim. They hold that knowledge and justification can only arise from the exercise of cognitive abilities that are integrated with one's cognitive character. 30

Virtue reliabilists can explain the intuition that Yen knows that the wall is red at t1. In all but variant 1 Yen does know that the wall is red at t1. It's easy to mix up the variants, since information regarding Yen's dogmatic character is revealed only later. Furthermore, variant 1 is perhaps the most unnatural way of fleshing out the case. Most people would withdraw their belief if they were presented with a putative defeater. It's plausible that we implicitly assume that Yen is non-dogmatic when originally evaluating whether Yen knows at t1. First impressions are hard to shake, especially when it comes to intuitions. That said, those who think that the objection is successful are invited to see this paper as offering an argument against virtue reliabilism. My aim was to examine what virtue reliabilists ought to say about defeat given some of their core commitments.
Other Virtue-Theoretic Proposals
I will briefly consider some alternative solutions to the problem of knowledge defeat put forth by virtue reliabilists. In order to deal with defeaters, Greco (2010) adds a subjective justification condition to his analysis of knowledge. According to Greco, subject S's belief that p is subjectively justified "if and only if S's believing that p is properly motivated; if and only if S's believing that p results from intellectual dispositions that S manifests when S is motivated to believe the truth" (2010, p. 167).

Footnote 29: Neta (2002, pp. 675–676) has provided a contextualist theory of knowledge that yields an account of defeat bearing some similarity to the account proposed here, in that according to it acquiring new evidence cannot on its own defeat knowledge. I'd like to thank an anonymous reviewer at Erkenntnis for pointing this out.

Footnote 30: Hurka (2006), Sylvan (2017) and Lasonen-Aarnio (forthcoming-a) have criticized character-attaching virtue theories, like virtue reliabilism, for requiring that virtuous acts must arise from virtuous character.
Footnote 28 (continued): "...once, but then, because usually trustworthy S lied to me, I stopped knowing it." I would like to thank an anonymous reviewer at Erkenntnis for alerting me to Azzouni's work.
Greco's argument as to why knowledge entails subjective justification is motivated by his take on Aristotle's virtue ethics. According to him, virtuous action requires not only that the action arises from a virtuous character trait, but also that it is properly motivated by one's virtuous character (Greco, 2010, p. 43).
I won't take issue with the difficult question of the conditions under which a subject is properly motivated to believe the truth, nor with Greco's motivation for adding a subjective justification component to his analysis of knowledge. For the sake of the argument, I'll also grant that in putative cases of knowledge defeat Greco's subjective justification condition isn't satisfied, and that knowledge is hence lost in such cases. I only wish to note that the kind of virtue reliabilism that Greco endorses already has the necessary tools to explain our intuitions of knowledge defeat. Adding a subjective justification condition to the analysis isn't necessary and achieves nothing on this score. To me, adding this condition seems like an extra cost.

Pritchard (2018) has argued that his anti-luck virtue epistemology can account for the phenomenon of knowledge defeat. He claims that a subject who comes to know that [that's a barn] in an area with no barn facades around loses her knowledge if she sees a sign that says that she is in barn-façade county. He writes that "the safety of her cognitive success is now in despite of her manifestation of relevant cognitive agency, rather than being to any significant degree because of it" (2018, p. 3075). Assuming that her belief that [that's a barn] is a product of her cognitive abilities, I fail to see why the subject's safe cognitive success wouldn't be to a significant degree because of the exercise of her cognitive abilities. After all, the fact that the subject trusts her perception seems to explain precisely why she continues to have a safe belief. Pritchard needs to tell us more about why the subject's safe cognitive success isn't attributable to her cognitive agency in cases like this if his explanation of knowledge defeat is to succeed. Moreover, if I am correct, Pritchard already has the necessary tools to accommodate our intuitions of knowledge defeat.
As far as I know, Sosa has not addressed the problem of knowledge defeat in print. However, given his distinction between animal and reflective knowledge, he could perhaps adopt the following view. 31 In putative cases of knowledge defeat one's original belief retains its aptness, and hence it amounts to animal knowledge. However, once the defeater is introduced, the subject can no longer aptly take her belief to be apt, which is what reflective knowledge would require (Sosa, 2011, ch. 1). Reflective knowledge requires that one competently assess the risk of forming a false belief to be low enough, and arguably one cannot have competently assessed the risk to be low enough if one has a defeater for one's belief. Therefore, defeaters would destroy reflective knowledge but leave animal knowledge intact. This error-theoretic account of knowledge defeat rests on Sosa's distinction between animal and reflective knowledge. Since the error-theory that I gave is derivable from the core tenets of virtue reliabilism, it's simpler than the possible account that Sosa's more complicated framework could yield. Moreover, if I am correct, Sosa has the resources to accommodate our intuitions of knowledge defeat without resorting to his distinction between animal and reflective knowledge.
To wrap up, the error-theory that I have presented is preferable to extant virtue reliabilist accounts of defeat since it is simpler than those accounts and stems from the core ideas of virtue reliabilism. Virtue reliabilists need not add bells and whistles to explain defeatist intuitions.
Conclusions
I argued that virtue reliabilism is able to explain our defeat intuitions via a new kind of error-theory that falls directly out of the core tenets of virtue reliabilism. According to the error-theory, in paradigmatic cases of knowledge defeat where the subject holds onto her belief, the subject never knew to begin with. In cases where the subject suspends her judgment upon receiving the defeater she might have originally known, but doesn't anymore, since she lacks the relevant belief. In neither case is knowledge lost solely in virtue of the fact that the subject acquired a defeater, and hence knowledge defeat is an illusory phenomenon. Nevertheless, the defeatists are right in claiming that there are no cases where a subject retains her knowledge of p after having acquired a defeater for p.
"Philosophy"
] |
"Learning Implicit Sentiment in Aspect-based Sentiment Analysis with Supervised Contrastive Pre-Trai(...TRUNCATED) | 5,805.8 | 2021-11-03T00:00:00.000 | [
"Computer Science"
] |
"Facile Synthesis, Characterization, and Photocatalytic Performance of BiOF/BiFeO 3 Hybrid Heterojun(...TRUNCATED) | 5,441.8 | 2023-03-21T00:00:00.000 | [
"Chemistry",
"Materials Science",
"Environmental Science"
] |
"Influence of water soaking on swelling and microcharacteristics of coal\n\nImproving the coal seam (...TRUNCATED) | 5,706.4 | 2019-10-20T00:00:00.000 | [
"Environmental Science",
"Engineering"
] |
"Resiliency of healthcare expenditure to income shock: Evidence from dynamic heterogeneous panels\n\(...TRUNCATED) | 8,122.8 | 2023-03-07T00:00:00.000 | [
"Economics"
] |
"Concentration and Poincar\\'e type inequalities for a degenerate pure jump Markov process\n\nWe stu(...TRUNCATED) | 7,092.4 | 2018-03-28T00:00:00.000 | [
"Mathematics"
] |
"Co-administration of either curcumin or resveratrol with cisplatin treatment decreases hepatotoxici(...TRUNCATED) | 5,724.6 | 2024-07-22T00:00:00.000 | [
"Medicine",
"Environmental Science",
"Chemistry"
] |
End of preview. Expand
in Data Studio
README.md exists but content is empty.
- Downloads last month
- 15