9 – SL NB 08 S Bayesian Learning 2 V1 V6

So let’s see. We have three spam emails, and one of them contains the word ‘easy,’ which means the probability of an email containing the word ‘easy’ given that it’s spam is one-third. Since two of the three spam emails contain the word ‘money,’ the probability of an email containing the word ‘money’ given that it’s spam is two-thirds. Similarly, since there are five ham emails and one of them contains the word ‘easy,’ the probability of an email containing the word ‘easy’ given that it’s ham is one-fifth. And the same goes for the word ‘money.’ The main gist of Bayesian learning is the following: we go from what’s known, which is P of ‘easy’ given spam and P of ‘money’ given spam, to what’s inferred, which is P of spam given that the email contains the word ‘easy,’ one-half, since there are two emails containing the word ‘easy’ and only one of them is spam, and P of spam given that the email contains the word ‘money,’ two-thirds, since there are three emails containing the word ‘money’ and two of them are spam.
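The counting argument above can be sketched in code. This is a minimal illustration, not part of the lesson: the toy dataset below is a hypothetical set of eight emails constructed so that its counts match the transcript (three spam, five ham, with the stated occurrences of ‘easy’ and ‘money’).

```python
from fractions import Fraction

# Hypothetical toy dataset matching the transcript's counts:
# 3 spam emails (1 contains 'easy', 2 contain 'money'),
# 5 ham emails (1 contains 'easy', 1 contains 'money').
emails = [
    ("spam", {"easy", "money"}),
    ("spam", {"money"}),
    ("spam", set()),
    ("ham", {"easy"}),
    ("ham", {"money"}),
    ("ham", set()),
    ("ham", set()),
    ("ham", set()),
]

def p_word_given_label(word, label):
    """P(word | label): fraction of emails with this label that contain the word."""
    labeled = [words for lbl, words in emails if lbl == label]
    return Fraction(sum(word in words for words in labeled), len(labeled))

def p_label_given_word(label, word):
    """P(label | word): fraction of emails containing the word that have this label."""
    containing = [lbl for lbl, words in emails if word in words]
    return Fraction(sum(lbl == label for lbl in containing), len(containing))

# What's known: conditional probabilities of each word given spam.
print(p_word_given_label("easy", "spam"))   # 1/3
print(p_word_given_label("money", "spam"))  # 2/3

# What's inferred: probability of spam given each word.
print(p_label_given_word("spam", "easy"))   # 1/2
print(p_label_given_word("spam", "money"))  # 2/3
```

Counting label frequencies among the emails that contain a word is exactly the inversion Bayes’ rule performs: it turns P(word | label) and the class counts into P(label | word).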
