
System Learning Method Using Spam Filter

C.A. Kannadasan, R. Jayakumar

Abstract


System learning methods can figure out how to perform essential tasks by generalizing from examples. This is often feasible and more effective where explicit human programming is not. As larger volumes of data become available, more ambitious problems can be tackled. As a result, system learning methods are broadly used in computer science and other fields. However, developing successful spam-filter applications with system learning methods requires a substantial amount of "black art" that is hard to find in textbooks. This article summarizes twelve key lessons that system learning researchers and practitioners have learned. These include pitfalls to avoid when building a spam filter, important issues to focus on, and answers to common questions.
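
As a rough illustration of the learning-from-examples idea summarized above, and of the Boolean vector and scoring function keywords listed below, the sketch that follows trains a minimal naive Bayes spam filter on a few labelled messages. This is not the authors' implementation: the NaiveBayesSpamFilter class, the Bernoulli word-presence model, and the toy training data are all illustrative assumptions.

from collections import defaultdict
import math


def tokenize(text):
    # Boolean word features: each message is reduced to the set of words it contains.
    return set(text.lower().split())


class NaiveBayesSpamFilter:
    def __init__(self, smoothing=1.0):
        self.smoothing = smoothing                       # Laplace smoothing constant
        self.word_counts = {"spam": defaultdict(int), "ham": defaultdict(int)}
        self.class_counts = {"spam": 0, "ham": 0}
        self.vocabulary = set()

    def train(self, examples):
        # examples: iterable of (message_text, label) pairs, label in {"spam", "ham"}.
        for text, label in examples:
            self.class_counts[label] += 1
            for word in tokenize(text):
                self.word_counts[label][word] += 1
                self.vocabulary.add(word)

    def score(self, text, label):
        # Scoring function: log prior plus one Bernoulli likelihood term per vocabulary word.
        tokens = tokenize(text)
        total = sum(self.class_counts.values())
        log_prob = math.log(self.class_counts[label] / total)
        denom = self.class_counts[label] + 2 * self.smoothing
        for word in self.vocabulary:
            p = (self.word_counts[label][word] + self.smoothing) / denom
            log_prob += math.log(p if word in tokens else 1.0 - p)
        return log_prob

    def classify(self, text):
        return max(("spam", "ham"), key=lambda label: self.score(text, label))


if __name__ == "__main__":
    training_data = [                                    # toy examples, purely illustrative
        ("win money now", "spam"),
        ("cheap pills offer", "spam"),
        ("meeting agenda attached", "ham"),
        ("lunch tomorrow with the team", "ham"),
    ]
    model = NaiveBayesSpamFilter()
    model.train(training_data)
    print(model.classify("win a cheap offer"))           # expected: spam
    print(model.classify("team meeting tomorrow"))       # expected: ham

The Bernoulli word-presence model is used here only because it maps each message onto a Boolean vector over the vocabulary and scores classes with a simple log-probability function; practical spam filters rely on far larger training sets and richer features.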

Cite this Article
C.A. Kannadasan, R. Jayakumar. System Learning Method Using Spam Filter. Journal of Advances in Shell Programming. 2016; 3(1): 19–27p.


Keywords


Boolean vector, objective function, scoring function



