Data Verification Task
A Data Verification Task is a data process that identifies data inconsistencies after a data migration task.
- See: Data Quality, Proofreading, Data Consistency, Data Migration, Data Transfer, Data Loss, Double Entry.
References
2018
- (Wikipedia, 2018) ⇒ https://en.wikipedia.org/wiki/Data_verification Retrieved:2018-11-27.
- Data verification is a process in which different types of data are checked for accuracy and consistency after a data migration is completed.
It helps to determine whether the data was accurately translated when transferred from one source to another, whether it is complete, and whether it supports processes in the new system. During verification, there may be a need for a parallel run of both systems to identify areas of disparity and forestall erroneous data loss.
Types of data verification include double entry and proofreading. Proofreading involves someone checking the entered data against the original document; like double entry, it is time-consuming and costly.
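The quoted description can be made concrete with a small sketch. The Python below illustrates one common post-migration check: fingerprinting every record on both systems and diffing the fingerprint sets, which flags records that were dropped, altered, or spuriously introduced during the transfer. The `record_fingerprint` and `verify_migration` helpers are hypothetical names introduced here for illustration, not part of any cited system.

```python
import hashlib

def record_fingerprint(record):
    """Hash a record's canonical text form so rows can be compared cheaply."""
    canonical = "|".join(str(field) for field in record)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def verify_migration(source_rows, target_rows):
    """Diff record fingerprints between the old (source) and new (target) system."""
    source_hashes = {record_fingerprint(r) for r in source_rows}
    target_hashes = {record_fingerprint(r) for r in target_rows}
    return {
        "missing_in_target": len(source_hashes - target_hashes),
        "unexpected_in_target": len(target_hashes - source_hashes),
        "matched": len(source_hashes & target_hashes),
    }

# Example: "bob" was altered in transit and "carol" was dropped entirely.
source = [("alice", 100), ("bob", 200), ("carol", 300)]
target = [("alice", 100), ("bob", 201)]
print(verify_migration(source, target))
# {'missing_in_target': 2, 'unexpected_in_target': 1, 'matched': 1}
```

Hashing a canonical form of each record, rather than comparing rows field by field, keeps the comparison cheap for large tables, at the cost of reporting only that a record differs, not which field changed.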
2015
- (Gisdakis et al., 2015) ⇒ Stylianos Gisdakis, Thanassis Giannetsos, and Panos Papadimitratos. (2015). “SHIELD: A Data Verification Framework for Participatory Sensing Systems.” In: Proceedings of the 8th ACM Conference on Security & Privacy in Wireless and Mobile Networks.
- ABSTRACT: The openness of participatory sensing (PS) systems renders them vulnerable to malicious users that can pollute the measurement collection process, in an attempt to degrade the PS system data and, overall, its usefulness. Mitigating such adversarial behavior is hard. Cryptographic protection, authentication, authorization, and access control can help, but they do not fully address the problem. Reports from faulty insiders (participants with credentials) can target the process intelligently, forcing the PS system to deviate from the actual sensed phenomenon. Filtering out those faulty reports is challenging, with practically no prior knowledge of the participants' trustworthiness, dynamically changing phenomena, and possibly large numbers of compromised devices. This paper proposes SHIELD, a novel data verification framework for PS systems that can complement any security architecture. SHIELD handles available, contradicting evidence, efficiently classifies incoming reports, and effectively separates and rejects those that are faulty. As a result, the deemed-correct data can accurately represent the sensed phenomena, even when 45% of the reports are faulty, intelligently selected by coordinated adversaries and targeted optimally across the system's coverage area.
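SHIELD's actual classification machinery is specified in the paper itself. Purely as a generic stand-in for the idea the abstract describes, rejecting faulty reports with no prior knowledge of participants' trustworthiness, the sketch below filters a pool of sensor readings with a robust statistic (the median absolute deviation), which stays informative even when a sizable fraction of reports is adversarial. The `filter_reports` helper and the 3.0 cutoff are assumptions for illustration, not SHIELD's algorithm.

```python
import statistics

def filter_reports(readings, threshold=3.0):
    """Split readings into accepted and rejected sets using the median
    absolute deviation (MAD), a robust alternative to mean/stddev that
    tolerates a large fraction of faulty values."""
    med = statistics.median(readings)
    mad = statistics.median(abs(x - med) for x in readings)
    if mad == 0:  # all readings agree; nothing to reject
        return list(readings), []
    accepted, rejected = [], []
    for x in readings:
        (accepted if abs(x - med) / mad <= threshold else rejected).append(x)
    return accepted, rejected

# Honest sensors report ~21 degrees; coordinated faulty reports inject ~35.
reports = [20.9, 21.1, 21.0, 20.8, 21.2, 35.0, 34.8, 35.2]
good, bad = filter_reports(reports)
print(good)  # [20.9, 21.1, 21.0, 20.8, 21.2]
print(bad)   # [35.0, 34.8, 35.2]
```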