Talk:Prosecutor's fallacy

From Wikipedia, the free encyclopedia
WikiProject Philosophy (Rated C-class, Mid-importance)
This article is within the scope of WikiProject Philosophy, a collaborative effort to improve the coverage of content related to philosophy on Wikipedia. If you would like to support the project, please visit the project page, where you can get more details on how you can help, and where you can join the general discussion about philosophy content on Wikipedia.
C-Class article C  This article has been rated as C-Class on the project's quality scale.
 Mid  This article has been rated as Mid-importance on the project's importance scale.
WikiProject Statistics (Rated C-class, Low-importance)
This article is within the scope of the WikiProject Statistics, a collaborative effort to improve the coverage of statistics on Wikipedia. If you would like to participate, please visit the project page or join the discussion.

C-Class article C  This article has been rated as C-Class on the quality scale.
 Low  This article has been rated as Low-importance on the importance scale.

About Introductions in Every Wikipedia Article[edit]

Mathematicians and scientists are desperate to be complete, correct, and precise. That's a good thing. But this is a very bad thing for the introductory sentences of an article. The result is that we get introduction bloat. This problem has grown in Wikipedia over the years. I see this in page after page.

Introductions must be short and simple summaries. I think a rule of thumb would be a max limit of 3 sentences with 10 words for each sentence. I know this idea is intimidating. It has been for me. But now when I try to read other people's articles, I just get confused.

I'm working on a computer science thesis on Bayes' Rule. I really want to modify the intro paragraph for this article to make it simple. I would much rather that someone else started swinging the scythe. It is a thankless task that can easily make enemies. But I appeal to you all, my colleagues. Think of the poor high school kid seeing this for the first time...or the poor lawyer who can't do math...or the dumb politician who only has 30 seconds...or the poor thesis writer who wants to know the difference between the general idea and the exceptions, boundary conditions, contrarian opinions, idiosyncrasies, latest developments, anticipations, implications, bullet lists, subordinate concepts, and everything else that doesn't go in an introduction. I'm thinking that maybe the brevity of a dictionary definition is about right for an introduction.

A really really simple math-free example with provocatively relevant teeth should be right after the introduction. After that, the triple integrals can really fly.

My apologies in advance if I've offended. But please, let us work together. Let's try to remember rhetorical architecture. Wiki will be better for it. --ClickStudent (talk) 16:49, 23 May 2008 (UTC)
Good point, the intro is still too complex. Cinnamon colbert (talk) 19:48, 20 July 2008 (UTC)

Original article on the so-called Prosecutor's fallacy[edit]

perhaps someone could more clearly explain this:

Argument from rarity – Consider this case: a lottery winner is accused of cheating, based on the improbability of winning. At the trial, the prosecutor calculates the (very small) probability of winning the lottery without cheating and argues that this is the chance of innocence. The logical flaw: the prosecutor has failed to account for the low prior probability of winning in the first place.

I am usually good at math and probability stuff, but the above has me stumped. Sorry if this makes me sound stupid. It says that the prosecutor failed to account for the low prior probability of winning in the first place, but it also says that the prosecutor calculates the very small probability of winning. It seems contradictory. —Preceding unsigned comment added by (talk) 00:04, 30 January 2011 (UTC)

From the original article on the so-called Prosecutor's fallacy:

Consider for instance the case of Sally Clark, who was accused in 1998 of having killed her first child at 11 weeks of age, then conceived another child and killed it at 8 weeks of age. The prosecution had an expert witness testify that the probability of two children dying from sudden infant death syndrome is about 1 in 73 million. To provide proper context for this number, the probability of a mother killing one child, conceiving another and killing that one too, should have been estimated and compared to the 1 in 73 million figure, but it wasn't. Ms. Clark was convicted in 1999, resulting in a press release by the Royal Statistical Society which pointed out the mistake.

The reason the case against Clark was flawed has nothing to do with the Prosecutor's fallacy, or the misapplication of statistics; though I agree that the way in which statistics were used here was simply wrong.

The case against Clark was flawed because it rests on a false dichotomy: Either Sally Clark's children both died from sudden infant death syndrome or she killed them.

While it's true that SIDS and infanticide are contrary explanations, showing one to be false is insufficient by itself to establish the truth of the alternative.

True, but nobody, not even the defense, introduced any other explanations; essentially everyone agreed that those two were the only possible explanations in that case. That should definitely be stated in the article though. AxelBoldt 20:28, 28 Jul 2004 (UTC)
One issue... SIDS and infanticide still don't work together in a simple binomial distribution. They're not the only possibilities, despite the lack of any others being brought up. By a simple true/false logic, there is a 100% chance that one of these two possibilities is true. This runs into Cromwell's rule: by assigning probability 1 to a set of possibilities that could certainly be expanded upon, we ignore all outside data not covered. Now, to get to the point, I quote a previous user: "While it's true that SIDS and infanticide are contrary explanations, showing one to be false is insufficient by itself to establish the truth of the alternative." That's true. If the prosecutor argued on this fallacy, his claim of a 1 in 73 million chance of the children dying of SIDS would naturally have to make the incredible assumption that there is a 72,999,999 in 73 million chance that the children were killed by her. That is, of course, if he stuck to the simple binomial idea that if one possibility is true, the other must be false. I'm sure we can agree that that claim is absolutely ridiculous from a statistical standpoint. The original entry was correct in saying that the probability of the murder should have been calculated and compared to this probability. As far as I can tell, the entry is correct as it stands.

Perhaps the assumption that these were the only two possibilities should be introduced into the article, as user AxelBoldt stated. Yes, it's a very likely assumption, and one that would be correct. I would see that as the reason that the statistics were misused: a simple misconception, namely the false assumption of a binomial distribution. It's said that statistics lie, and they can be warped very easily.

I removed:

Of course some consider the Prosecutor's fallacy no fallacy at all, because a logical fallacy, by Wikipedia's own definition, is any way in which an argument fails to be valid or sound. Based as it is upon probability, the Prosecutor's fallacy is not a deductive argument to begin with, and therefore questions of its validity or soundness are themselves variants of the fallacy of stolen concept.

I don't think anyone, not even prosecutors or Bayesians, considers the reasoning behind the Prosecutor's fallacy to be sound. Furthermore, reasoning about probabilities can very well be fallacious -- the Gambler's fallacy, for example.

I also cut the remainder of the article significantly, pointing out that some Bayesians may find it justified to use an a priori probability of 1/2, which is what the Prosecutor's fallacy implicitly assumes. AxelBoldt 18:58 Oct 13, 2002 (UTC)

I don't see how the part of the article that says

One formulation of Bayes' theorem then states:
Odds(G|E) = Odds(G) · P(E|G)/P(E|~G)

is justified. Using Bayes' theorem to find the probability of guilt gives:

Odds(G|E) = Odds(G) · P(E|G) / P(E)

where due to mutual exclusion:

P(E) = P(E|~G) · P(~G) + P(E|G) · P(G)

The article's formula holds only when you make the additional, unstated assumption that P(G) is small enough to make P(E|~G) a good approximation for P(E).

That's not incorrect; the formula Odds(G|E) = Odds(G) * P(E|G)/P(E|~G) is correct, without any unstated assumptions. To prove the formula, divide Pr(G|E)=Pr(E|G)*Pr(G)/Pr(E) by Pr(~G|E)=Pr(E|~G)*Pr(~G)/Pr(E). AxelBoldt 20:28, 28 Jul 2004 (UTC)
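For anyone who wants to verify AxelBoldt's claim that the odds form needs no small-P(G) approximation, here is a quick numerical sketch (the probabilities below are arbitrary illustrative numbers, not figures from any case):

```python
# Check that Odds(G|E) = Odds(G) * P(E|G) / P(E|~G) holds exactly,
# by comparing it with the direct computation through the full P(E).

def posterior_odds_lr(p_g, p_e_given_g, p_e_given_not_g):
    """Posterior odds of G given E via the likelihood-ratio form of Bayes."""
    prior_odds = p_g / (1 - p_g)
    return prior_odds * (p_e_given_g / p_e_given_not_g)

def posterior_odds_direct(p_g, p_e_given_g, p_e_given_not_g):
    """Same quantity from P(G|E)/(1 - P(G|E)), expanding P(E) by total probability."""
    p_e = p_e_given_g * p_g + p_e_given_not_g * (1 - p_g)
    p_g_given_e = p_e_given_g * p_g / p_e
    return p_g_given_e / (1 - p_g_given_e)

a = posterior_odds_lr(0.3, 0.9, 0.05)
b = posterior_odds_direct(0.3, 0.9, 0.05)
print(a, b)  # the two agree, so no assumption about P(G) being small is needed
```

Note that P(G) = 0.3 here is deliberately not small, and the two computations still agree.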
The article itself is biased and has a prosecutor's fallacy in it. In the section labeled "Defense attorney's fallacy" a Wikipedia contributing editor (or several) indicates that the defense attorneys for OJ Simpson twisted the truth. However, the editors choose to ignore the times when the prosecutors in the case twisted the truth/facts/statistics -- (talk) 19:34, 19 March 2016 (UTC)

Does this happen everywhere in the world, or is it limited to the U.S.? -- Taku

No, it has occurred in cases in the UK -- Tony Vignaux 09:00 18 May 2003 (UTC)

I'm not sure that the Clark example illustrates the Prosecutor's Fallacy very well, because I think there's another closely related fallacy. I think I understand the Bayesian analysis, but I think there's a perhaps more obvious error than not computing the prior Odds(G): the calculation of Odds(G|E) doesn't take into account that any family that has 2 children who die of SIDS could find themselves in the same situation as Clark, and there are probably millions of such families.

Suppose 50% of people are guilty and we believe that the probability of two children in the same household dying of SIDS is 1 in 73 million. We still shouldn't be convinced that Clark is guilty. Rather, if there are, say, 10 million households with two children, and any time two children die there's a police inquiry (because it's suspicious when 2 children in the same household die), then the odds of an unfortunate 2-time SIDS mother coming before the court are approximately 1/7.

BTW, the fallacy of the other example given in this article seems more related to the number of people tested than to failing to take into account the a priori probability.

Here's an example of not taking into account the a priori probability that has no problem with the number of people tested: suppose I believe that I have magical abilities that allow me to win the lottery, even though the odds of winning with random numbers are 1 in 1 billion. I play and win the lottery. Are you convinced I have magical abilities? I'd say it's certainly worth checking out, but anyone's reasonable estimate of the a priori odds that I have magical powers is very very low. So, it's more likely that I was simply very lucky than that I have magical powers. Note that this example doesn't require any argument about how many other people declare that they have magical powers and play the lottery -- I could be the only one, and my claim would still be dubious.

What are people's thoughts on my reasoning here? If it's sound, does anyone know which is the prosecutor's fallacy: failing to account for the a priori probability, or failing to account for the number of tests done?

Zashaw 05:21 24 May 2003 (UTC)
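Zashaw's lottery example above can be put into numbers. The prior below (1 in a trillion for magical powers) is an invented illustration, not a figure from the discussion; only the 1-in-a-billion winning odds come from the comment:

```python
# Bayes with a tiny prior: does winning the lottery demonstrate magical powers?
prior = 1e-12        # assumed prior probability of magical powers (invented for illustration)
p_win_magic = 1.0    # generous assumption: magic guarantees a win
p_win_luck = 1e-9    # odds of winning with random numbers (from the comment)

posterior = (p_win_magic * prior) / (
    p_win_magic * prior + p_win_luck * (1 - prior)
)
print(posterior)  # about 0.001: the win raises the odds enormously, yet luck remains far more likely
```

The win multiplies the odds of magic by a billion, but a billion times a trillionth is still only about one in a thousand, which matches Zashaw's point that the claim stays dubious.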

I don't exactly follow your alternative analysis (and I am sure there aren't "millions of families" with 2 SIDS deaths). The prosecutor's fallacy, to put it succinctly, is the assumption that tiny Pr(E|~G) yields similarly tiny Pr(~G|E). I think there are various possible reasons that can cause a person to commit this fallacy. AxelBoldt 20:28, 28 Jul 2004 (UTC)

In another scenario, assume a rape has been committed, and all the males of the town are rounded up for DNA testing. Finally one man whose DNA matches is arrested. At the trial, it is testified that the probability of finding a DNA match is only 1 in 10,000. This does not mean that the suspect is innocent with the tiny probability of 1 in 10,000. If for instance 20,000 men were tested, then we would expect to find two matches, and the suspect is innocent with probability at least 1 in 2.

This was confusing. If all the men in the town have been tested, and there's only 1 match, then that's your guy. So I replaced the above with a version that didn't say all the men had been tested. Does it look OK? Evercat 13:30 14 Jul 2003 (UTC)

It's unfortunate that people change text that they simply don't understand. The statement is that 1 in 10,000 people will match a given DNA sample. Given 20,000 people, 2 are likely to match -- they can't both be guilty. That means that DNA testing is not 100% accurate, thus if there's only one match, that does not prove that it's your guy.

I found the resulting paragraph a bit confusing, because at first it wasn't stated who was tested, other than the indicted man. So, I changed it to just say that 20,000 men were tested, & didn't say how this related to the size of the town. BTW, though, the example is valid even if everyone in the town is tested, since it's not clear that the rapist necessarily lived in that town -- he could easily have come from a neighboring town.
I also didn't see how the probability 1 in 2 was arrived at, at the end of the paragraph, so I redid the calculation, I think more accurately.
Zashaw 22:58 19 Jul 2003 (UTC)
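A rough sketch of the arithmetic behind the quoted DNA example (assuming, as the quoted text does, that the guilty man is among those tested -- an assumption the thread itself questions):

```python
# DNA example: match frequency 1 in 10,000, 20,000 men tested,
# one of whom (assumed) is the guilty man.
match_rate = 1 / 10_000
n_tested = 20_000

# Besides the guilty man, the other 19,999 innocent men contribute
# about 2 expected matches:
expected_innocent_matches = (n_tested - 1) * match_rate

# A particular matching suspect is one of (1 guilty + ~2 innocent) expected matches:
p_guilty = 1 / (1 + expected_innocent_matches)
print(p_guilty, 1 - p_guilty)  # roughly 1/3 guilty, 2/3 innocent
```

On these numbers a matching suspect is innocent with probability about 2/3, consistent with the quoted "at least 1 in 2". This sketch ignores the possibility that the rapist is outside the tested pool, which would push the innocence probability higher still.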

spurious argument[edit]

I removed the following paragraph from this article:

Another instance of the prosecutor's fallacy is sometimes encountered when discussing the origins of life: the probability of life arising at random out of the physical laws is estimated to be tiny, and this is presented as evidence for a creator, without regard for the possibility that the probability of such a creator could be even tinier.

There is absolutely no way to establish the probability of there being a creator. One person might say the probability is zero. Another might say that it's 1. A few may put it somewhere in between. But the question isn't really one of probability at all. --Rholton 02:40, 3 Dec 2004 (UTC)

That's not a valid reason to remove the paragraph. The described argument takes the probability of there being a creator as the inverse of the probability of life arising at random out of physical laws. Within that framework, the argument is an example of the prosecutor's fallacy. Whether that inverse relationship really holds, or whether it's actually possible to calculate the probability of life arising at random out of physical laws, is beside the point. 11:50, 13 March 2006 (UTC)

Strange statement[edit]

This seems to be incorrect, unless I'm missing something:

When the photographic evidence is combined with the match, the two together point strongly towards guilt, since (assuming the chance of being in the photograph and having the match are independent) the chance that the accused is innocent falls to about 0.01%.

It seems that the probability of innocence only falls to 89.1%, since 0.9 x 0.99 = 0.891. Even with the untenable independence assumption (for example, someone may be framing them by setting up several pieces of evidence), it would take quite a bit more evidence to show anything beyond reasonable doubt. Deco 07:16, 17 Feb 2005 (UTC)

You are indeed missing something and falling into the defendant's fallacy. The argument goes that the prior probability that the man is innocent is 9,999,999/10,000,000. While the likelihood of having the match and being in the video may be 1 if guilty, the likelihood of the match if innocent is 1/1,000,000, and the likelihood of being in the video if innocent is 1/100,000, so (assuming independence since this is mathematics, not real life) the likelihood of both happening if innocent is 1/100,000,000,000. That gives a posterior probability of being innocent of 9,999,999/100,009,999,999 which is 0.000099989991... or about 0.01%. --Henrygb 03:11, 12 Mar 2005 (UTC)
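Henrygb's arithmetic can be reproduced exactly with rational arithmetic, using only the numbers stated in the comment above:

```python
from fractions import Fraction

# Exact reproduction of the posterior-innocence calculation in the comment.
prior_innocent = Fraction(9_999_999, 10_000_000)
prior_guilty = 1 - prior_innocent

# Likelihood of DNA match AND video appearance, assuming independence:
lik_innocent = Fraction(1, 1_000_000) * Fraction(1, 100_000)  # if innocent
lik_guilty = 1                                                # if guilty

posterior_innocent = (prior_innocent * lik_innocent) / (
    prior_innocent * lik_innocent + prior_guilty * lik_guilty
)
print(posterior_innocent)         # 9999999/100009999999
print(float(posterior_innocent))  # ≈ 0.0001, i.e. about 0.01%
```

Using Fraction avoids any floating-point doubt about the 9,999,999/100,009,999,999 figure.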

Thanks, I get it now - I forgot we were dealing here with conditional probabilities, not just prior probabilities. This result fits with common sense, too — we often consider 2 pieces of strong evidence conclusive. Deco 03:30, 12 Mar 2005 (UTC)

Who originated the terms "prosecutor's fallacy" and "defense fallacy." Shouldn't there be a reference to the origin of these terms?

I'm sick and tired of the phrase "a concrete example".

--- Interesting case? Sounds more like tragic to me! —Preceding unsigned comment added by (talk) 09:23, 11 September 2007 (UTC)

Right answer to the wrong question[edit]

This is rather minor and doesn't take anything away from the article, but in the interests of accuracy...

At the top, the article mentions a 1-in-a-million chance in a community of 10 million people, saying that on average 10 people will match. Then later it says that this means that if someone matches, there is only a 1/10 chance of guilt. However, this is not the case: if we know someone matches, it changes the expected number of people who will match. There is one person (the defendant) who we know matches, and there are 9,999,999 other people who each have a 1-in-a-million chance of matching, from which we can expect about 10 people to match. This gives the defendant a 1/11 chance of guilt. Of course this figure changes again if numerous tests were performed and only the defendant matched - it's 1 in (1 + one millionth of the untested population).

Though I don't want to add this straight into the article, I want to get some confirmation I'm not completely wrong in my calculations here... most notably that I'm not falling on the wrong side of the trap "Expected number given this particular person matches" vs "Expected number given at least one person matches" - I think I have it right though. Phlip 06:49, 6 October 2005 (UTC)
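Phlip's figures check out numerically. A quick sketch using only the numbers from the comment (it takes the "given this particular person matches" reading, which is the distinction Phlip is worried about):

```python
# Match frequency 1 in a million, community of 10 million.
match_rate = 1 / 1_000_000
population = 10_000_000

# Unconditionally, about 10 people are expected to match:
print(population * match_rate)  # ≈ 10

# Given that the defendant matches, the *other* 9,999,999 people still
# contribute ~10 expected matches, so the defendant is 1 of ~11 expected matches:
expected_others = (population - 1) * match_rate
p_guilt = 1 / (1 + expected_others)
print(p_guilt)  # ≈ 1/11 ≈ 0.0909
```
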

A surprising example[edit]

A friend of mine sent me this example. It may deserve a paragraph here or in the Bayes theorem article. Bill Jefferys 21:36, 16 December 2005 (UTC)

Possibly erroneous statement[edit]

"When the photographic evidence is combined with the match, the two together point strongly towards guilt, since (assuming the chance of being in the photograph and having the match are independent) the chance that the accused is innocent falls to about 0.01%. This low probability of innocence is not proof of guilt." - the derivation of 0.01% is unclear. This could be wrong.

I said the same thing. See above. Perhaps this should be clarified in the article. Deco 18:52, 22 December 2005 (UTC)

Another possibly erroneous statement[edit]

In legal terms, the prosecutor is operating in terms of a presumption of guilt, something which is contrary to the normal presumption of innocence where a person is assumed to be innocent unless found guilty. A more reasonable value for the prior odds of guilt might be a value estimated from the overall frequency of the given crime in the general population.

In the UK at least, prosecutors are obliged to presume guilt, regardless of evidence to the contrary.

Defence lawyers are obliged to presume innocence, regardless of evidence to the contrary (unless their client pleads guilty).

Jurors are obliged to initially presume innocence, and change their minds according to the cases presented, but only return a guilty verdict if they are sure 'beyond reasonable doubt' of the defendant's guilt.

And, finally, expert witnesses are obliged to presume nothing -- neither innocence nor guilt -- and to approach the situation without bias.

So, prosecutors assuming guilt is not contrary to the normal presumption of innocence ;)

I took it out. Figured the edit needed explaining --DakAD 15:42, 7 June 2006 (UTC)

Can you provide evidence to support your statement that UK Prosecutors are required to presume guilt please? ---*- u:Chazz/contact/t: 02:26, 26 February 2007 (UTC)
This is profoundly wrong; DakAD might like to read the Crown Prosecution Service's "Statement of Ethical Principles" to be found at - in particular Section 4, which states both that prosecutors "have a duty to the court in question to act with independence in the interest of justice" and "remain impartial and objective". (talk) 23:09, 23 February 2010 (UTC)

... and later died from alcohol abuse[edit]

Where does the fact that Sally Clark died of alcohol abuse come from? Not from the article linked... (talk) 11:04, 26 August 2008 (UTC)

It also seems completely irrelevant to the subject of the article. (talk) 13:07, 9 July 2009 (UTC)

Could it be used to show the implications of using the prosecutor's fallacy, by showing how much it can damage an innocent person? —Preceding unsigned comment added by (talk) 02:39, 11 March 2011 (UTC)

I have added a citation regarding the cause of death of Sally Clark. I do not see how the effect on her of a wrongful conviction for killing her two babies can in any way be irrelevant to the main subject. The case arose largely because a forensic pathologist behaved incompetently to the level of 'serious professional misconduct' and withheld exculpatory evidence; he then compounded this by revisiting his analysis of the cause of death of the first child and decided that it wasn't by natural causes but by smothering. Meadow's misuse of statistics, and failure to make clear to the court that his evidence was outside his field of expertise, contributed greatly to her wrongful conviction. Meadow later tried to minimise the effect of his erroneous and inexpert use of statistics; unfortunately significant harm had been done to a number of innocent defendants - see his page on Wiki. In Sally Clark's case he contributed to the destruction of her life. Limhey (talk) 08:56, 17 October 2015 (UTC)

Defendant's fallacy[edit]

This whole section is completely fallacious, since much of its content is falsely based on the gambler's fallacy. The section goes into a very long and incomprehensible description of why the defendant should be almost certainly guilty, and I had to read it three times to understand it, although my knowledge of probability theory goes quite far, since I studied mathematics and I play poker for a living. So, as far as I have understood, the test is 99.9999% correct, but there is only one person out of 10 million who is actually guilty. So, in fact, the expected number of guilty people is 1*99.9999%, or almost one person, and the expected number of false positives is 9,999,999*0.0001%, or about 10. So, according to Bayes' theorem, the defendant's reasoning is actually correct; he just used a shortcut to get there.

The section goes on to add a video example, where (IANAL, I just use common sense) using it to show guilt is mathematically nonsensical. Also, it assumes a probability of 0.1% of the two events having happened (where, also, assuming their mutual independence changes the odds by a very large degree), which is utterly false, as it revolves around the same idea that a coin having come up tails 10 times in a row now has only a 2^−10 chance to come up tails on the next throw. I suggest this section be removed or at least completely rewritten, as it is full of false information that contradicts the rest of this article. (talk) 17:32, 11 September 2009 (UTC)
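For what it's worth, the single-test expected-value arithmetic in the comment above can be checked directly (a sketch using only the comment's numbers; the reply below it concerns the combined evidence, which is a different calculation):

```python
# One test that is 99.9999% correct, one guilty person in a population of 10 million.
population = 10_000_000
n_guilty = 1
accuracy = 0.999999

expected_true_positives = n_guilty * accuracy                        # ~1
expected_false_positives = (population - n_guilty) * (1 - accuracy)  # ~10

# Among people who test positive, the fraction actually guilty:
p_guilty_given_match = expected_true_positives / (
    expected_true_positives + expected_false_positives
)
print(expected_false_positives, p_guilty_given_match)  # ~10 false positives, ~1/11 guilty
```
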

I don't see that. The article says "the likelihood of both happening if innocent is 1/100,000,000,000", so the combination of tests does not lead to your 99.9999% accuracy but to 99.999999999%, and so the combination of tests leads to a very much smaller expected false positive number. --Rumping (talk) 08:07, 27 January 2010 (UTC)

Um, how is this relevant to the defendant's fallacy? And one other thing to consider regarding the so-called gambler's fallacy: it is only a fallacy if one assumes a frequentist perspective or if one endorses a tautology. In order to establish conditional independence, which is necessary to buttress a claim of fallaciousness, one must observe numerous trials. But suppose a person only views a few trials, and all come out a certain way (say heads, for example): a reasonable person might very well believe the next outcome would not be random; a Bayesian would think the next outcome should be heads because his prior for heads is quite high. The point is that one cannot establish conditional independence without endorsing the frequentist paradigm, and without conditional independence, it is by definition NOT improper to use the previous outcome to modify the current posterior probability estimate. —Preceding unsigned comment added by (talk) 21:05, 25 June 2010 (UTC)

Mathematical analysis thought experiment is fallacious[edit]

The thought experiment involving balls in a bowl is fallacious (ironic, since it is supposed to be exposing a fallacy). Drawing a white ball actually does give you good reason to believe the ball is wood, since it is (Bayesian) evidence that there is not a large majority of red (plastic) balls in the bowl. Anyone who wants to revert this edit, please provide a source/reference for this thought experiment, which is currently unattributed. Joncolvin (talk) 05:40, 2 February 2010 (UTC)

Defense attorney's fallacy[edit]

This section seems to be a re-worked Defendant's fallacy. I should point out I'm not familiar with this fallacy; however, this appears to me to be a very different argument. The Defendant's fallacy (seen above on this talk page) seems to address the problem of dismissing two separate items of evidence as weak, when they can be combined and as such are quite strong. The article now seems to suggest that, by conducting a test of 10 million people with a probability of a positive of 1 in a million given innocence, it is false to assume that a positive result is only indicative of a 10% chance of guilt. Leaving aside the question of whether the 10 million people are sure to contain the guilty party (a question raised above), it seems to me that this evidence alone does only give approximately a 1 in 10 (or 11) chance of guilt. The earlier article coupled a similar test with a further test to obtain an increased probability. This section seems an example of the fallacy caused by multiple testing, mentioned above. While a probability of 1 in 10 (or 11) may actually be significant with other evidence, as the citation seems to suggest, it hardly seems a fallacy to produce this sort of probability without further supporting evidence.

Probability is an area I have some knowledge in, but I'm in no way specialized in this area. I don't assume that I haven't missed something, and so I'd rather someone corrected me here than I mistakenly corrected the article. Timbo76 (talk) 05:49, 11 January 2011 (UTC)

I have added a sentence to indicate how to interpret the probability that a group of suspects might not include the guilty person. —DIV ( (talk) 04:30, 5 October 2019 (UTC))

The Sally Clark case[edit]

The article says the one of two possibilities must be true: Both died from SIDS or Both were murdered. What about the probability of one dying from SIDS and one murdered, or deaths from other causes? —Preceding unsigned comment added by (talk) 02:40, 11 March 2011 (UTC)

These issues are tackled in: Hill, R. (2004). Multiple sudden infant deaths – coincidence or beyond coincidence? Paediatric and Perinatal Epidemiology, 18, 320-326. (talk) 06:41, 2 June 2011 (UTC)
No, this is a problem with the article. It shouldn't say there are only two possibilities, since obviously there is a third possibility: one child dies of SIDS and the other is murdered. Perhaps it should be reworded, such as "If we consider the following two possibilities..." (talk) 02:41, 3 September 2012 (UTC)
It's also (and ironically) a problem with the RSS press release, which read: "Two deaths by SIDS or two murders are each quite unlikely, but one has apparently happened in this case." However, there is a plausible argument (which doesn't appear to be given explicitly by Hill) why the one-of-each hypothesis can be neglected. If a single death is much more likely to be SIDS than murder (Hill gives this ratio as 17:1, and we would see many more murder suspects if it were not so), then SIDS-SIDS is more likely than SIDS-murder (by the same ratio) even if double SIDS deaths are uncorrelated (and a positive correlation would enhance the ratio; a negative correlation would be needed to reverse it -- or other relevant information about a given case, of course). I've neglected to consider the factor of 2 associated with SIDS-murder vs. murder-SIDS, but 2 is much less than 17. — Preceding unsigned comment added by (talk) 14:41, 4 October 2012 (UTC)
The preceding paragraph is based on the assumption that the probability of murder (or SIDS) is independent of prior SIDS (or murder), which would also have to be tested. But it seems likely that the ordering is indeed as given, i.e. that (in the absence of other evidence) double-SIDS is more likely than one-of-each. (Of course, if other evidence indicates that murder is more likely than SIDS in one of the two deaths, then it may well be that the entire ordering is reversed: SIDS becomes less likely than murder in the other death too, and hence double-SIDS becomes the least likely explanation.) — Preceding unsigned comment added by (talk) 14:52, 4 October 2012 (UTC)
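The 17:1 argument above can be sketched in a few lines. It assumes independence between the two deaths (the very assumption the paragraph above flags as untested), and the 17:1 single-death ratio is the figure attributed to Hill in the discussion:

```python
# Relative plausibility of the three double-death hypotheses,
# assuming each death is independently 17x more likely SIDS than murder.
r = 17  # single death: SIDS taken to be 17x more likely than murder (Hill's ratio)

# Relative weights, up to a common normalising factor:
w_sids_sids = r * r       # 289
w_one_of_each = 2 * r     # SIDS-murder plus murder-SIDS: 34 (the factor of 2 for ordering)
w_murder_murder = 1       # baseline

print(w_sids_sids / w_one_of_each)  # 8.5: double SIDS still dominates one-of-each
```

This reproduces the comment's point: even after allowing the factor of 2 for the two orderings, double SIDS remains 17/2 = 8.5 times more plausible than one-of-each under these assumptions.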

The article says that even if the one in 73 million statistic was correct, then it still might be unlikely due to the other possibilities all having low base rates. This is incorrect. It already said that the historical rate for two cases of SIDS is much higher than that. The probability of that being a coincidence is far too small to consider. Clearly, the probability of two apparent SIDS cases is much higher than one in 73 million, so one of the alternatives must have a high base rate. — DanielLC 18:40, 22 March 2014 (UTC)


The wording of a lede should not be as difficult to follow for the ordinary reader as this; it's far too technical. I very much doubt anyone without real knowledge of statistics will be enlightened by it, and many will be put off reading further. The detailed knowledge that Melcombe obviously has should be used to simplify the lede, even if that makes it less complete. The full technical explanations should be moved to the main body of the article, which is the proper place to go into them. Overagainst (talk) 21:19, 11 June 2012 (UTC)

The lede is unintelligible, and I have expertise in this area. CSDarrow (talk) 21:31, 16 April 2013 (UTC)

Clarity in writing[edit]

I have a few college degrees and I'm still not sure I understand what the prosecutor's fallacy is, even after reading this article. It seems like a common problem with logic-related articles on Wikipedia.

Can the editors working on this piece pretend that the average reader a) knows nothing about the philosophical field of logic, b) knows little about the processes of law and statistics, and c) cannot decipher these cryptic equations? The average person who comes across this article (probably because another article links to it) does not have a background in logic or philosophy. Do not write for those who already know what a prosecutor's fallacy is; write for those who don't even know what a "fallacy" is. That's the best contribution you could make. Liz Read! Talk! 14:41, 10 November 2013 (UTC)

External links modified[edit]

Hello fellow Wikipedians,

I have just added archive links to one external link on Prosecutor's fallacy. Please take a moment to review my edit. If necessary, add {{cbignore}} after the link to keep me from modifying it. Alternatively, you can add {{nobots|deny=InternetArchiveBot}} to keep me off the page altogether. I made the following changes:

When you have finished reviewing my changes, please set the checked parameter below to true to let others know.

As of February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{sourcecheck}} (last update: 15 July 2018).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—cyberbot IITalk to my owner:Online 17:27, 27 February 2016 (UTC)

Possible typo in the "Defense attorney's fallacy" section[edit]

There appears to be a typo in the intro of this section. It lists a change from 10% to 10%, but that's the same number both times. I am not familiar enough with the subject to know what the correct wording should be. Here is the sentence I am talking about:

"If, for example, the police came up with a list of 10 suspects, all of whom had access to the crime scene, then it would be very illogical indeed to suggest that a test that offers a one-in-a-million chance of a match would change the defendant's prior probability from 1 in 10 or 10 percent to 1 in 10 or 10 percent." (talk) 16:47, 31 October 2018 (UTC)