
Data Discretization And Concept Hierarchy Generalization Ppt: A Tutorial with Exercises



Data discretization refers to converting a large number of data values into a smaller set of values so that the data become easier to evaluate and manage. In other words, data discretization converts the values of a continuous attribute into a finite set of intervals with minimal loss of information. There are two forms of data discretization: supervised and unsupervised. Supervised discretization uses the class labels when choosing interval boundaries. Unsupervised discretization ignores the class labels and is characterized by the direction in which the operation proceeds: either a top-down splitting strategy or a bottom-up merging strategy.
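As a concrete illustration of the unsupervised case, the sketch below uses scikit-learn's KBinsDiscretizer (an assumed tool choice; the text does not prescribe a library), comparing an equal-width strategy, in the spirit of top-down splitting of the value range, with an equal-frequency strategy. The toy age column and the bin count are made up for the example.

```python
import numpy as np
from sklearn.preprocretizer import KBinsDiscretizer  # noqa: placeholder, see correct import below
```

```python
import numpy as np
from sklearn.preprocessing import KBinsDiscretizer

# Toy 1-D age attribute (illustrative values only).
ages = np.array([[18], [22], [25], [31], [40], [47], [55], [63], [70]])

# Equal-width intervals: split the value range into bins of equal size.
equal_width = KBinsDiscretizer(n_bins=3, encode="ordinal", strategy="uniform")
print(equal_width.fit_transform(ages).ravel())   # [0. 0. 0. 0. 1. 1. 2. 2. 2.]

# Equal-frequency intervals: each bin holds roughly the same number of values.
equal_freq = KBinsDiscretizer(n_bins=3, encode="ordinal", strategy="quantile")
print(equal_freq.fit_transform(ages).ravel())
```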


Binning is a data smoothing technique that groups a large number of continuous values into a smaller number of bins. The same technique can also be used for data discretization and for building concept hierarchies.
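A minimal pandas sketch of binning used for smoothing ("smoothing by bin means") is shown below; the price values, the number of bins, and the bin labels are assumptions chosen for illustration.

```python
import pandas as pd

# Sorted price values (illustrative data).
prices = pd.Series([4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34], name="price")

# Partition the values into three equal-frequency bins ...
bins = pd.qcut(prices, q=3, labels=["low", "medium", "high"])

# ... and smooth by replacing each value with the mean of its bin.
smoothed = prices.groupby(bins, observed=True).transform("mean")
print(pd.DataFrame({"price": prices, "bin": bins, "smoothed": smoothed}))
```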







The term hierarchy denotes an organizational structure or mapping in which items are ranked according to their level of importance. A concept hierarchy, in other words, is a sequence of mappings from a set of low-level, specific concepts to higher-level, more general concepts. Computer science offers many familiar hierarchical systems: a document stored in a folder, which in turn occupies a specific place in the Windows directory tree, is a simple example of a hierarchical tree model. Concept hierarchies can be defined in two directions: top-down mapping and bottom-up mapping.
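The climb from a low-level concept to a more general one can be sketched as a simple lookup; the city-to-country mapping and the customer table below are hypothetical and only meant to illustrate the idea.

```python
import pandas as pd

# Hypothetical low-level -> high-level mapping (city -> country).
city_to_country = {
    "Ann Arbor": "USA",
    "Toronto": "Canada",
    "Munich": "Germany",
    "Detroit": "USA",
}

customers = pd.DataFrame({"customer_id": [1, 2, 3, 4],
                          "city": ["Ann Arbor", "Toronto", "Munich", "Detroit"]})

# Climbing the hierarchy: replace the specific concept (city)
# with the more general concept (country).
customers["country"] = customers["city"].map(city_to_country)
print(customers)
```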


Data discretization converts the values of a continuous attribute into a finite set of intervals with minimal loss of information. In contrast, data binarization transforms continuous and discrete attributes into binary attributes.
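A minimal sketch of binarization using scikit-learn's Binarizer (an assumed tool choice; any simple thresholding would do), with an illustrative income column and threshold:

```python
import numpy as np
from sklearn.preprocessing import Binarizer

# Toy income attribute (illustrative values).
income = np.array([[12_000.0], [38_000.0], [55_000.0], [90_000.0]])

# Values above the threshold map to 1, the rest to 0.
binarizer = Binarizer(threshold=40_000.0)
print(binarizer.fit_transform(income).ravel())   # [0. 0. 1. 1.]
```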


Continuous data gives rise to mathematical problems with an infinite number of degrees of freedom, so for many purposes data scientists need to apply discretization. Discretization can also improve the signal-to-noise ratio.


Discretization & concept hierarchy operation: data discretization techniques divide continuous attributes into intervals, replacing many raw attribute values with the labels of a small number of intervals. As a result, mining results can be presented in a concise and easily understandable way.


Concept hierarchies: a concept hierarchy reduces the data size by collecting low-level concepts (such as the numeric age 43) and replacing them with high-level concepts (categorical values such as middle aged or senior).


Generalization: generalization converts low-level data attributes into high-level attributes using a concept hierarchy. For example, an age attribute that starts out in numerical form (22, 25) can be converted into categorical values (young, old), and categorical attributes such as house addresses can be generalized to higher-level concepts such as town or country.
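A minimal pandas sketch of both conversions is given below; the age breakpoints, the category labels, and the toy addresses are illustrative assumptions rather than values taken from the text.

```python
import pandas as pd

people = pd.DataFrame({
    "age": [22, 25, 43, 58, 67],
    "address": ["12 Elm St, Springfield", "4 Oak Ave, Springfield",
                "9 Pine Rd, Shelbyville", "1 Main St, Springfield",
                "7 Lake Dr, Shelbyville"],
})

# Numerical -> categorical: replace raw ages with interval labels.
people["age_group"] = pd.cut(people["age"],
                             bins=[0, 30, 55, 120],
                             labels=["young", "middle_aged", "senior"])

# Categorical -> higher level: generalize a street address to its town.
people["town"] = people["address"].str.split(", ").str[-1]
print(people[["age", "age_group", "address", "town"]])
```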


Data cleaning comprises and overlaps with other preprocessing steps discussed later, such as detecting and handling noise and handling missing values. To give a deeper explanation, I will treat each of these steps separately.
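As a small illustration of the cleaning steps just mentioned, the following pandas sketch imputes missing values and clips noisy outliers; the column names, the median-fill strategy, and the percentile cut-offs are assumptions chosen for the example, not a prescribed method.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"age": [22, np.nan, 43, 58, np.nan],
                   "income": [38_000, 55_000, np.nan, 90_000, 41_000]})

# Handle missing values: impute numeric gaps with the column median.
df_filled = df.fillna(df.median(numeric_only=True))

# Handle noise/outliers: clip values outside the 5th-95th percentile range.
low, high = df_filled["income"].quantile([0.05, 0.95])
df_filled["income"] = df_filled["income"].clip(lower=low, upper=high)
print(df_filled)
```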


Data transformation: this is a preprocessing step that converts data from one format or structure to another in order to reveal patterns or to improve the efficiency of data mining models. Most data mining algorithms require the data to be supplied in a specific format to work efficiently, so data transformation provides an alternative representation of the original data. Its subtasks include feature engineering, data aggregation (summarization), data normalization, discretization, and generalization; each will be explained separately. These transformation processes are also known as data wrangling or data munging, and they can be simple or complex depending on the changes required to map data from the source format to the target format.
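Two of the subtasks listed above, normalization and aggregation, can be sketched briefly with pandas; the table, the column names, and the min-max scaling choice are illustrative assumptions.

```python
import pandas as pd

sales = pd.DataFrame({"store": ["A", "A", "B", "B"],
                      "month": ["Jan", "Feb", "Jan", "Feb"],
                      "revenue": [120.0, 150.0, 300.0, 360.0]})

# Normalization (min-max scaling to [0, 1]) so attributes share a common range.
rev = sales["revenue"]
sales["revenue_scaled"] = (rev - rev.min()) / (rev.max() - rev.min())

# Aggregation (summarization): roll monthly revenue up to one row per store.
per_store = sales.groupby("store", as_index=False)["revenue"].sum()
print(sales)
print(per_store)
```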


In the previous section the various preprocessing tools were highlighted; here Python is used to show how data preprocessing can be carried out. Python is chosen for its simplicity and readability, so beginners can understand the concepts and focus less on programming syntax. Python also offers an exhaustive collection of open-source libraries covering data analysis, data visualization, statistics, machine learning, and deep learning.


Biography: Sharon C. Glotzer is the John W. Cahn Distinguished University Professor at the University of Michigan, Ann Arbor, the Stuart W. Churchill Collegiate Professor of Chemical Engineering, and the Anthony C. Lembke Department Chair of Chemical Engineering. She is also Professor of Materials Science and Engineering, Physics, Applied Physics, and Macromolecular Science and Engineering. Her research on computational assembly science and engineering aims toward predictive materials design of colloidal and soft matter: using computation, geometrical concepts, and statistical mechanics, her research group seeks to understand complex behavior emerging from simple rules and forces, and use that knowledge to design new classes of materials. Glotzer's group also develops and disseminates powerful open-source software including the particle simulation toolkit, HOOMD-blue, which allows for fast molecular simulation of materials on graphics processors, the signac framework for data and workflow management, and several analysis and visualization tools. Glotzer received her B.S. in Physics from UCLA and her PhD in Physics from Boston University. She is a member of the National Academy of Sciences, the National Academy of Engineering and the American Academy of Arts and Sciences.


In this paper, we consider the task of ranking individuals based on the potential benefit of being "treated" (e.g. by a drug or exposure to recommendations or ads), referred to as Uplift Modeling in the literature. This application has seen a surge of interest in recent years and is found in many areas such as personalized medicine, recommender systems, or targeted advertising. In real-life scenarios, the capacity of models to rank individuals by potential benefit is measured by the Area Under the Uplift Curve (AUUC), a ranking metric related to the well-known Area Under the ROC Curve. When the objective function used for learning model parameters differs from AUUC, the capacity of the resulting system to generalize on AUUC is limited. To tackle this issue, we propose to learn a model that directly optimizes an upper bound on AUUC. To find such a model, we first develop a generalization bound on AUUC and then derive from it a learning objective called AUUC-max, usable with linear and deep models. We empirically study the tightness of this generalization bound and its effectiveness for hyperparameter tuning, and show the efficiency of the proposed learning objective compared to a wide range of competitive baselines on two classical uplift modeling benchmarks using real-world datasets.


Recent works have shown that Generative Adversarial Networks (GANs) may generalize poorly and thus are vulnerable to privacy attacks. In this paper, we seek to improve the generalization of GANs from a perspective of privacy protection, specifically in terms of defending against the membership inference attack (MIA) which aims to infer whether a particular sample was used for model training. We design a GAN framework, partition GAN (PAR-GAN), which consists of one generator and multiple discriminators trained over disjoint partitions of the training data. The key idea of PAR-GAN is to reduce the generalization gap by approximating a mixture distribution of all partitions of the training data. Our theoretical analysis shows that PAR-GAN can achieve global optimality just like the original GAN. Our experimental results on simulated data and multiple popular datasets demonstrate that PAR-GAN can improve the generalization of GANs while mitigating information leakage induced by MIA.


Gaussian Process (GP) offers a principled non-parametric framework for learning stochastic functions. The generalization capability of GPs depends heavily on the kernel function, which implicitly imposes smoothness assumptions on the data. However, common feature-based kernel functions are inefficient for modeling relational data, where the smoothness assumptions implied by the kernels are violated. To model the complex and non-differentiable functions over relational data, we propose a novel Graph Convolutional Kernel, which enables relational structures to be incorporated into feature-based kernels to capture the statistical structure of the data. To validate the effectiveness of the proposed kernel function in modeling relational data, we introduce GP models with the Graph Convolutional Kernel in two relational learning settings, i.e., the unsupervised setting of link prediction and the semi-supervised setting of object classification. The parameters of our GP models are optimized through the scalable variational inducing point method. However, the highly structured likelihood objective requires dense sampling from the variational distributions, which is costly and makes optimization challenging in the unsupervised setting. To tackle this challenge, we propose a Local Neighbor Sampling technique with a provably more efficient computational complexity. Experimental results on real-world datasets demonstrate that our model achieves state-of-the-art performance in two relational learning tasks.


In this paper, we take a different approach and propose to use graph coarsening for scalable training of GNNs, which is generic, extremely simple, and has sublinear memory and time costs during training. We present an extensive theoretical analysis of the effect of using coarsening operations and provide useful guidance on the choice of coarsening methods. Interestingly, our theoretical analysis shows that coarsening can also be considered a type of regularization and may improve generalization. Finally, empirical results on real-world datasets show that, by simply applying off-the-shelf coarsening methods, we can reduce the number of nodes by up to a factor of ten without causing a noticeable downgrade in classification accuracy.




