Research Article Open Access

Customized Privacy Preservation Using Unknowns to Stymie Unearthing Of Association Rules

J. Indumathi1 and G.V. Uma2
  • 1
  • 2 Afghanistan
Journal of Computer Science
Volume 3 No. 11, 2007, 874-881


Submitted On: 25 November 2007 Published On: 30 November 2007

How to Cite: Indumathi, J. & Uma, G. (2007). Customized Privacy Preservation Using Unknowns to Stymie Unearthing Of Association Rules. Journal of Computer Science, 3(11), 874-881.


The explosion of new data mining techniques has increased privacy risks, because it is now possible to efficiently combine and cross-examine massive data stores, available on the web, in search of previously unknown hidden patterns. To make a publicly accessible system safe, we must guarantee not only that private sensitive data have been trimmed out, but also that inference channels have been blocked. Both the data and the knowledge concealed in the data should be made secure. Furthermore, the requirement of keeping the system as open as possible (to the extent that data sensitivity is not jeopardized) calls for diverse techniques that account for the disclosure control of sensitive data. At its core, the value of privacy-preserving data mining derives not only from its ability to extract important knowledge, but also from its resilience to attack: it performs at the required levels during both crisis and normal operation. The central thrust of this work is towards establishing a world with robust data security, where knowledge users continue to profit from data without compromising data privacy.

The goal of privacy-preserving data mining is to release a dataset that researchers can study without being able to identify (with high probability) sensitive information about any individual in the data. One technique for privacy-preserving data mining is to replace the sensitive items with unknown values. In many situations it is safer if the sanitization process assigns unknown values instead of false values: this obscures the sensitive rules while protecting the user of the data from false rules. In this study, we modify the blocking algorithms of [1] by proposing a new heuristic that reduces information loss, and we put forward an enhanced approach that overcomes the privacy breach problem of existing blocking approaches.
Although the authors of [1] argued that the rules are truly safe from an attack by an adversary, they did not formally prove this safety; here we provide such a proof. We investigate how probabilistic and information-theoretic techniques can be applied to this problem, and we give a more complete analysis of the effectiveness of these rule-obscuring techniques together with a formal study of the problem. Our preliminary results indicate that deterministic algorithms for privacy-preserving association rules offer a promising framework for controlling the disclosure of sensitive data and knowledge.
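The blocking idea described in the abstract can be illustrated with a small sketch: replacing an item's value with an unknown ('?') turns a rule's confidence into an interval, and sanitization continues until the provable (minimum) confidence of the sensitive rule falls below a threshold. The transaction layout, function names, and the greedy choice of which transactions to block below are illustrative assumptions, not the paper's exact algorithm.

```python
# Sketch of rule blocking with unknowns ('?') for a sensitive rule A -> B.
# Transactions are dicts mapping item name -> 1, 0, or '?'.
# All names and the greedy strategy are illustrative assumptions.

def holds(t, items, optimistic):
    """True if every item in `items` is present, treating '?' as present
    when `optimistic` is True and as absent otherwise."""
    return all(t[i] == 1 or (t[i] == '?' and optimistic) for i in items)

def confidence_bounds(db, A, B):
    """Lower and upper bounds on conf(A -> B) given unknown values.
    Lower bound: certainly-supporting / possibly-matching-antecedent.
    Upper bound: possibly-supporting / certainly-matching-antecedent."""
    certain_ab = sum(holds(t, A + B, False) for t in db)
    possible_ab = sum(holds(t, A + B, True) for t in db)
    certain_a = sum(holds(t, A, False) for t in db)
    possible_a = sum(holds(t, A, True) for t in db)
    lo = certain_ab / possible_a if possible_a else 0.0
    hi = min(1.0, possible_ab / certain_a) if certain_a else 1.0
    return lo, hi

def block_rule(db, A, B, threshold):
    """Greedily replace the consequent item with '?' in transactions that
    certainly support the rule, until the rule can no longer be proven to
    hold with confidence >= threshold."""
    for t in db:
        if confidence_bounds(db, A, B)[0] < threshold:
            break
        if holds(t, A + B, False):   # transaction certainly supports A -> B
            t[B[0]] = '?'            # block the consequent item

db = [{'a': 1, 'b': 1}, {'a': 1, 'b': 1},
      {'a': 1, 'b': 0}, {'a': 0, 'b': 1}]
block_rule(db, ['a'], ['b'], threshold=0.6)
print(confidence_bounds(db, ['a'], ['b']))  # provable confidence now 1/3
```

Note that, unlike distortion (writing false 0/1 values), the adversary can never be misled into believing a rule that does not hold in the real data: the true confidence always lies inside the reported interval.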




  • adversary
  • confidence
  • data sanitization
  • disclosure control
  • inference problem
  • machine learning
  • network
  • repository
  • support