
Bayes' Theorem (example - 2 exercises)

metamong 2022. 3. 24.

Q1) At a certain stage of a criminal investigation, the inspector in charge is 60% convinced of the guilt of a certain suspect.

Suppose now that a new piece of evidence, which shows that the criminal is left-handed, is uncovered.

If 20% of the population possesses this characteristic, how certain of the guilt of the suspect should the inspector now be if it turns out that the suspect is among this group?

(interpret 'how certain of A if B' as P(A|B))

 

A) According to Bayes' Theorem...


<Check that the two assumptions of Bayes' Theorem are satisfied!>

 

1> <Bayes assumption 1> - partition of the sample space -

 'the partitioned causes (G / Gᶜ) are mutually exclusive, and their union is the entire sample space'

- P(G) = 'the probability that the inspector in charge is convinced that the suspect is guilty'

- P(Gᶜ) = 'the probability that the inspector in charge is convinced that the suspect is not guilty'

(in other words, the inspector must be convinced of one of exactly two things: the suspect is guilty, or he is not!

by this assumption, we rule out from the start any middle ground of 'he might be guilty, or he might not be')

 

2> <Bayes assumption 2> - the law of total probability -

→ 'if we know the cause (G) and the effect (L; left-handedness), the probability of the effect can be written as below'

"P(L) = P(L∩G) + P(L∩Gᶜ) = P(G)P(L|G) + P(Gᶜ)P(L|Gᶜ)" (the events G and Gᶜ are mutually exclusive, with G∪Gᶜ = S)


<Bayes' Theorem - three kinds of probabilities>

P(A) (cause: whether the suspect is guilty) → P(B) (effect: whether the suspect has the left-handedness characteristic)

 

1> prior probability (the cause, P(A))

* P(A) = (①) = 0.6

* (since, by Bayes assumption 1, the cause events A and Aᶜ partition the sample space) P(Aᶜ) = (1-①) = 0.4

 

2> TPR (True Positive Rate)

* TPR = P(B|A) = (②) = 1 (the criminal was stated to be left-handed)

 

3> FPR (False Positive Rate)

* FPR = P(B|Aᶜ) = (③) = 0.2 (the proportion of left-handed people among the non-guilty population)


<Bayes computation>

 

4> P(A|B) = (④) = P(A∩B) / P(B) = P(A)P(B|A) / {P(A)P(B|A) + P(Aᶜ)P(B|Aᶜ)}

= (0.6*1) / {(0.6*1) + (0.4*0.2)} = 88.23529411764707(%)

* (applying Bayes assumption 2, the law of total probability, to the denominator)
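
(Spelling the same computation out step by step, as a minimal sketch; the names prior, tpr, fpr and the intermediate variables are mine, mirroring the quantities above.)

# Q1 computed as P(A|B) = P(A∩B) / P(B)
prior, tpr, fpr = 0.6, 1.0, 0.2           # P(A), P(B|A), P(B|Aᶜ)
joint = prior * tpr                       # P(A∩B) = P(A)P(B|A) = 0.6
total = prior * tpr + (1 - prior) * fpr   # P(B) by the law of total probability, ≈ 0.68
posterior = joint / total
print(posterior)                          # ≈ 0.8824, the 88.2% above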


Q2) After that, the new evidence (Bayesian UPDATE) is subject to different possible interpretations, and in fact only shows that it is 90% likely that the criminal possesses this characteristic.
In this case, how likely would it be that the suspect is guilty?

 

5> (⑤) the new evidence is updated ≫ (the posterior probability from 4> becomes the new prior: P(A) = 0.8823529411764707)

6> TPR update) TPR = P(B|A) = (⑥) = 0.9

7> (combining 5> & 6>) P(A|B) = (⑦) = P(A∩B) / P(B) = P(A)P(B|A) / {P(A)P(B|A) + P(Aᶜ)P(B|Aᶜ)}

= (0.8823529411764707*0.9) / {(0.8823529411764707*0.9) + ((1-0.8823529411764707)*0.2)} = 97.12230215827338(%)


<Result>

 

→ after the new evidence is found, the prior probability of '88.2%' is updated to a posterior of '97.1%'

- this shows that once the inspector finds out the suspect is left-handed, his certainty in the suspect's guilt rises to about 97.1%, so we can reasonably speculate that this suspect is probably guilty

- this case also suggests that, as new evidence accumulates, the suspect becomes ever more likely to be judged guilty (though this does not hold for every case, nor in the reverse direction, and proving it in general would take an induction-style argument that goes too far for this post; a rough simulation of the idea follows the code demonstration below)


code demonstration (simple)

 

# prior: prior probability, P(A)
# tpr:   True Positive Rate, P(B|A)
# fpr:   False Positive Rate, P(B|Aᶜ)
def Bayesian(prior, tpr, fpr):
    return (prior*tpr) / (prior*tpr + (1-prior)*fpr)

 

Q1.

 

Bayesian(0.6,1,0.2)
#ans) 0.8823529411764707

 

Q2.

 

Bayesian(Bayesian(0.6,1,0.2),0.9,0.2)
#ans) 0.9712230215827338
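
(As a rough illustration of the 'new evidence accumulates' remark above, a minimal sketch that reuses Bayesian(); it assumes, purely for illustration, that each new independent piece of evidence points to the suspect with the same TPR = 0.9 and FPR = 0.2, which the exercise itself does not state.)

# repeated Bayesian updates: each step's posterior becomes the next step's prior
p = 0.6
for step in range(1, 6):
    p = Bayesian(p, 0.9, 0.2)
    print(step, p)
# the posterior climbs toward 1 as matching evidence keeps arriving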

** a separate post on Bayesian theory is planned for later **

 

 

* Source) Introduction to Probability and Statistics for Engineers and Scientists, 4th Ed.

* Thumbnail source) http://doingbayesiandataanalysis.blogspot.com/2013/12/icons-for-essence-of-bayesian-and.html
