Are there logicians who argue that knowing and believing are NOT amenable to formal study via modal logics?












Vincent Hendricks and John Symons note the following about epistemic logic:




Epistemic logic gets its start with the recognition that expressions like ‘knows that’ or ‘believes that’ have systematic properties that are amenable to formal study.




However, they also note the problem of logical omniscience implied by the K axiom:




A particularly malignant philosophical problem for epistemic logic is related to closure properties. Axiom K can, under certain circumstances, be generalized to a closure property for an agent's knowledge which is implausibly strong — logical omniscience:



Whenever an agent c knows all of the formulas in a set Γ and A follows logically from Γ, then c also knows A.
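
In standard epistemic-logic notation, writing $K_c$ for "agent $c$ knows that", the K axiom and the closure property read roughly as follows (this formalization is my own gloss, not a quotation from the entry):

\[
\text{(K)}\qquad K_c(\varphi \rightarrow \psi) \rightarrow (K_c\varphi \rightarrow K_c\psi)
\]
\[
\text{(Closure)}\qquad \text{if } \Gamma \vDash A \text{ and } K_c\gamma \text{ for every } \gamma \in \Gamma, \text{ then } K_c A
\]

Together with the necessitation rule (from a theorem $\varphi$, infer $K_c\varphi$), K yields closure under logical consequence, and that is the omniscience worry.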




They mention logicians such as Hintikka and Rantala who appear to offer ways around this problem.



Are there logicians who claim there is no way around the problem of logical omniscience and reject the view that expressions such as 'knows that' or 'believes that' are amenable to formal study?





Hendricks, Vincent and Symons, John, "Epistemic Logic", The Stanford Encyclopedia of Philosophy (Fall 2015 Edition), Edward N. Zalta (ed.), URL = https://plato.stanford.edu/archives/fall2015/entries/logic-epistemic/.










Tags: logic, epistemology, modal-logic






asked by Frank Hubeny










1 Answer
Logical omniscience was always only a technical problem related to the formalization of epistemic logic in terms of possible worlds. Since classical possible worlds are supposed to be consistent and complete, they must include all the consequences along with their premises. So if we describe the acquisition of knowledge as the elimination of uncertainty by ruling out possible worlds incompatible with it (Hintikka), we must "know" all the logical consequences. Hintikka called this the "scandal of deduction": it is absurd that we must know whether the Riemann hypothesis is true just because we know the axioms of set theory.
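
A minimal sketch of how that closure falls out of the standard possible-worlds truth clause (textbook Kripke semantics for a knowledge operator, stated here for reference):

\[
w \vDash K_c\varphi \quad\Longleftrightarrow\quad \forall v\,(w R v \;\Rightarrow\; v \vDash \varphi)
\]
\[
\text{if } w \vDash K_c\gamma \text{ for all } \gamma \in \Gamma \text{ and } \Gamma \vDash A, \text{ then every accessible } v \text{ satisfies } \Gamma, \text{ hence } A, \text{ so } w \vDash K_c A
\]

The "hence A" step is exactly where classical worlds do the work: each accessible $v$ is consistent and closed under consequence, so it cannot satisfy the premises without also satisfying $A$.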



Hintikka's own solution was to stratify consequences according to their "depth", the number of extra quantifier layers introduced to derive them, and to describe "deep" consequences as inaccessible. New quantifier layers correspond to new objects not mentioned in the initial premises; a simple example is the auxiliary constructions needed to derive geometric theorems. Euclid's demonstrations are by no means trivial.
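
A toy illustration of the depth idea (my example, not Hintikka's): an agent who knows a premise with two nested quantifiers need not, on this proposal, count as knowing a consequence whose statement requires a third layer, i.e., a further individual.

\[
\forall x\,\exists y\,R(x,y) \;\vDash\; \forall x\,\exists y\,\exists z\,\bigl(R(x,y) \wedge R(y,z)\bigr)
\]

The consequence is reached by introducing, for the witness $y$, a further $R$-successor $z$ not mentioned in the premise; this is the formal analogue of an auxiliary construction in a geometric proof.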



This turned out to be too simplistic for two reasons: Hintikka's depth does not fully capture the complexity of deriving consequences, and it introduces an arbitrary cut-off on what is accessible. The first problem was solved by introducing a second depth, roughly the number of layers of additional assumptions admitted and discharged in deriving the consequences; see D'Agostino's Philosophy of Mathematical Information. The second problem was solved by introducing vague accessibility. A full technical solution in terms of classically impossible (open) possible worlds is outlined, e.g., in Jago's Logical Information and Epistemic Space. To summarize, logical omniscience is no longer considered a problem.
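
The rough shape of the impossible-worlds move (a generic sketch, not Jago's specific construction): the knowledge clause is kept, but some accessible worlds are "open", with truth assigned formula by formula rather than computed classically, so they need not be closed under consequence.

\[
w \vDash K_c\varphi \quad\Longleftrightarrow\quad \forall v\,(w R v \;\Rightarrow\; v \Vdash \varphi)
\]
\[
\text{at an open world } v:\; v \Vdash \gamma \text{ for all } \gamma \in \Gamma \text{ does not force } v \Vdash A, \text{ even when } \Gamma \vDash A
\]

Such a world can witness the failure of $K_c A$ even though the agent knows every member of $\Gamma$, which is what blocks the closure argument above.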



There is, however, a much bigger obstacle to the analysis of knowledge, at least along the traditional lines of Plato's justified true belief: the Gettier problem of epistemic luck. It is easy to give examples where the justification, while reasonable, has little to do with the truth of the belief, so the "knowledge" becomes a lucky accident (as with subjects in the Matrix believing they have limbs). Although some technical workarounds are known (see SEP's Reliabilist Epistemology), it is widely believed that the underlying problem is intractable. Zagzebski in The Inescapability of Gettier Problems suggests that knowledge is unanalyzable, and Fodor in Concepts: Where Cognitive Science Went Wrong even argues that few interesting concepts are analyzable in the sense of analytic philosophy. But to say that knowledge is unanalyzable is not to say that it is not amenable to formal study; Euclid's lines and points, or the sets and elements of set theory, are also unanalyzable. Williamson, a modal arch-formalist, is also a proponent of "knowledge first", which treats knowledge as a basic notion. There is a good overview of the various approaches in SEP's Analysis of Knowledge.



Short in Peirce's Theory of Signs outlines a non-possible-worlds approach to modality and knowledge based on Peirce's "vague descriptions", but it is not very developed. For the most radical theses you would have to go to the more continental anti-formalization thinkers; Heidegger especially comes to mind. Somewhere in between are proponents of semiotics and hermeneutics, who favor more informal analysis in terms of meaning, understanding, and interpretation over formal epistemology; classical names are Dilthey and Husserl. See also What are the differences between philosophies presupposing one Logic versus many logics? on the older, non-formal sense of "logic".






answered by Conifold





























