The problem of the priors identifies an important issue between the Subjective and Objective Bayesians. If the constraints on rational inference are so weak as to permit any or almost any probabilistically coherent prior probabilities, then there would be nothing to make inferences in the sciences any more rational than inferences in astrology or phrenology or in the conspiracy reasoning of a paranoid schizophrenic, because all of them can be reconstructed as inferences from probabilistically coherent prior probabilities. Some Subjective Bayesians believe that their position is not objectionably subjective, because of results (e.g., Doob or Gaifman and Snir) proving that even subjects beginning with very different prior probabilities will tend to converge in their final probabilities, given a suitably long series of shared observations. These convergence results are not completely reassuring, however, because they only apply to agents who already have significant agreement in their priors, and they do not assure convergence in any reasonable amount of time. Also, they typically only guarantee convergence on the probability of predictions, not on the probability of theoretical hypotheses. For example, Carnap favored prior probabilities that would never raise above zero the probability of a generalization over a potentially infinite number of instances (e.g., that all crows are black), no matter how many observations of positive instances (e.g., black crows) one might make without finding any negative instances (i.e., non-black crows). In addition, the convergence results depend on the assumption that the only changes in probabilities that occur are those that are the non-inferential results of observation on evidential statements and those that result from conditionalization on such evidential statements. But almost all subjectivists allow that it can sometimes be rational to change one's prior probability assignments.
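The merging-of-opinions idea behind these convergence results can be illustrated with a small simulation. This is only a sketch with made-up priors, not a rendering of the formal theorems cited above: two agents start with sharply opposed priors over a coin's bias, conditionalize on the same sequence of tosses, and end up nearly agreeing.

```python
import random

random.seed(0)

# Hypotheses about a coin's heads-probability, and two agents whose
# priors disagree sharply (numbers are made up for illustration).
biases = [0.2, 0.5, 0.8]
post_a = [0.90, 0.05, 0.05]   # agent A: coin is almost certainly tails-biased
post_b = [0.05, 0.05, 0.90]   # agent B: coin is almost certainly heads-biased

def conditionalize(prior, heads):
    """Bayesian update of a distribution over `biases` on one observed toss."""
    likelihoods = [b if heads else 1 - b for b in biases]
    unnorm = [p * l for p, l in zip(prior, likelihoods)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

# Both agents observe the same 500 tosses of a fair coin.
for _ in range(500):
    heads = random.random() < 0.5
    post_a = conditionalize(post_a, heads)
    post_b = conditionalize(post_b, heads)

disagreement = max(abs(a - b) for a, b in zip(post_a, post_b))
print(disagreement)  # very small: the two posteriors have nearly merged
```

Note how the sketch also exhibits the first caveat in the text: both agents assign positive prior probability to the true hypothesis, which is exactly the kind of prior agreement the convergence theorems presuppose.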
In the limit, an Objective Bayesian would hold that rational constraints uniquely determine prior probabilities in every circumstance. This would make the prior probabilities logical probabilities determinable purely a priori. None of those who identify themselves as Objective Bayesians holds this extreme form of the view. Nor do they all agree on precisely what the rational constraints on degrees of belief are. For example, Williamson does not accept Conditionalization in any form as a rational constraint on degrees of belief. What unites all of the Objective Bayesians is their conviction that in many circumstances, symmetry considerations uniquely determine the relevant prior probabilities, and that even when they don't uniquely determine the relevant prior probabilities, they often so constrain the range of rationally admissible prior probabilities as to assure convergence on the relevant posterior probabilities. Jaynes identifies four general principles that constrain prior probabilities: group invariance, maximum entropy, marginalization, and coding theory. But he does not consider the list exhaustive; he expects additional principles to be added in the future. However, no Objective Bayesian claims that there are principles that uniquely determine rational prior probabilities in all cases.
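One of these principles, maximum entropy, can be computed directly in simple cases. The sketch below uses Jaynes's well-known dice example with illustrative numbers: among all distributions over the faces 1 through 6 with a prescribed mean of 4.5, the entropy-maximizing one has the exponential form p_i proportional to exp(lam * i), and lam can be found by bisection.

```python
import math

# Faces of a die and the prescribed mean constraint (illustrative numbers).
faces = list(range(1, 7))
target_mean = 4.5

def mean_for(lam):
    """Mean of the exponential-family distribution p_i ∝ exp(lam * i)."""
    weights = [math.exp(lam * i) for i in faces]
    z = sum(weights)
    return sum(i * w for i, w in zip(faces, weights)) / z

# mean_for is strictly increasing in lam, so bisection converges.
lo, hi = -5.0, 5.0
for _ in range(80):
    mid = (lo + hi) / 2
    if mean_for(mid) < target_mean:
        lo = mid
    else:
        hi = mid
lam = (lo + hi) / 2

weights = [math.exp(lam * i) for i in faces]
z = sum(weights)
p = [w / z for w in weights]   # the maximum-entropy distribution
```

Because the required mean (4.5) exceeds the uniform mean (3.5), lam comes out positive and the resulting distribution tilts toward the higher faces, while remaining as spread out as the constraint allows.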
B. The problem of old evidence. On a Bayesian account, the effect of evidence E in confirming (or disconfirming) a hypothesis is solely a function of the increase in probability that accrues to E when it is first determined to be true. This raises the following puzzle for Bayesian Confirmation Theory, discussed extensively by Glymour: Suppose that E is an evidentiary statement that has been known for some time (that is, it is old evidence); and suppose that H is a scientific theory that has been under consideration for some time. One day it is discovered that H implies E. In scientific practice, the discovery that H implied E would typically be taken to provide some degree of confirmatory support for H. But Bayesian Confirmation Theory seems unable to explain how a previously known evidentiary statement E could provide any new support for H. For conditionalization to come into play, there must be a change in the probability of the evidence statement E. Where E is old evidence, there is no change in its probability. Some Bayesians who have tried to solve this problem (e.g., Garber) have typically tried to weaken the logical omniscience assumption to allow for the possibility of discovering logical relations (e.g., that H and suitable auxiliary assumptions imply E). As mentioned above, relaxing the logical omniscience assumption threatens to block the derivation of almost all of the important results in Bayesian epistemology. Other Bayesians (e.g., Lange) employ the Bayesian formalism as a tool in the rational reconstruction of the evidentiary support for a scientific hypothesis, where it is irrelevant to the rational reconstruction whether the evidence was discovered before or after the theory was initially formulated. Joyce and Christensen agree that discovering new logical relations between previously accepted evidence and a theory cannot raise the probability of the theory.
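The formal point that old evidence is confirmationally inert under conditionalization is easy to verify numerically. In the toy credence function below (hypothetical numbers), E already has probability one, so conditionalizing on it cannot move the probability of H.

```python
# A toy credence function over the four (H, E) possibilities, chosen
# (hypothetically) so that E is old evidence: it already has probability 1.
joint = {
    (True, True): 0.25,   # H and E
    (False, True): 0.75,  # not-H and E
    (True, False): 0.0,
    (False, False): 0.0,
}

def prob(pred):
    """Probability of the set of worlds satisfying `pred`."""
    return sum(p for world, p in joint.items() if pred(world))

p_E = prob(lambda w: w[1])
p_H = prob(lambda w: w[0])
p_H_given_E = prob(lambda w: w[0] and w[1]) / p_E

print(p_E)                 # 1.0: E is already certain
print(p_H_given_E == p_H)  # True: conditionalizing on E changes nothing
```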
However, they suggest that using Pi(H/E) − Pi(H/−E) as a measure of support can at least explain how evidence that has probability one could still support a theory. Eells and Fitelson have criticized this proposal and argued that the problem is better addressed by distinguishing two measures: the historical measure of the degree to which a piece of evidence E actually confirmed a hypothesis H, and the ahistorical measure of how much a piece of evidence E would support a hypothesis H, on given background information B. The second measure enables us to ask the ahistorical question of how much E would support H if we had no other evidence supporting H.
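The difference measure itself can be computed on a toy distribution. The sketch below (purely illustrative numbers) evaluates Pi(H/E) − Pi(H/−E) in the ordinary case where E is not yet certain; applying the measure to probability-one evidence, as Joyce and Christensen propose, requires conditional probabilities taken as primitive, which this simple ratio calculation does not deliver.

```python
# Illustrative joint distribution over (H, E) in a setting where E has
# not yet been learned, so P(E) < 1 and both conditional terms are defined.
joint = {
    (True, True): 0.20,
    (True, False): 0.05,
    (False, True): 0.25,
    (False, False): 0.50,
}

def prob(pred):
    """Probability of the set of worlds satisfying `pred`."""
    return sum(p for world, p in joint.items() if pred(world))

p_E = prob(lambda w: w[1])                                       # 0.45
p_H_given_E = prob(lambda w: w[0] and w[1]) / p_E
p_H_given_not_E = prob(lambda w: w[0] and not w[1]) / (1 - p_E)

support = p_H_given_E - p_H_given_not_E
print(support > 0)   # True: on this distribution, E would support H
```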