Abstract
Firms can rely on various data protection methods to comply with the General Data Protection Regulation's (GDPR) anonymization directive. We develop a privacy attack to estimate customers’ privacy risk and find that data protection methods commonly used in practice do not offer a reliable guarantee of privacy protection. We therefore develop a framework that describes the use of deep learning to generate synthetic data that are both (differentially) private and useful for marketing analysts. Empirically, we apply our framework to two privacy-sensitive marketing applications in which an analyst faces everyday managerial practices. In contrast to the GDPR's directive to minimize data collection, we show that customers’ privacy risk can be reduced by blending into a large crowd: a “Where's Waldo” effect. Our framework provides a data protection method with a formal privacy guarantee and allows analysts to quantify, control, and communicate privacy risk levels to stakeholders, draw meaningful insights, and share data even under privacy regulations.
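The abstract refers to synthetic data that are “(differentially) private.” As a point of reference only, and not a reproduction of the paper's framework, the sketch below illustrates the basic differential-privacy idea with a Laplace mechanism applied to a simple counting query. The function name, the epsilon value, and the toy customer data are illustrative assumptions.

```python
import numpy as np


def laplace_count(data, predicate, epsilon, rng=None):
    """Release a differentially private count of records matching a predicate.

    A counting query has sensitivity 1 (adding or removing one customer
    changes the count by at most 1), so Laplace noise with scale 1/epsilon
    gives epsilon-differential privacy for this single query.
    """
    rng = rng or np.random.default_rng()
    true_count = sum(predicate(record) for record in data)
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise


# Toy example (illustrative data, not from the paper): release how many
# customers churned, with a formal privacy guarantee attached.
customers = [{"id": i, "churned": i % 4 == 0} for i in range(1000)]
private_count = laplace_count(customers, lambda r: r["churned"], epsilon=0.5)
print(f"DP churn count (epsilon = 0.5): {private_count:.1f}")
```

A smaller epsilon injects more noise and yields a stronger guarantee; a parameter of this kind is what allows an analyst to quantify, control, and communicate a privacy risk level, as the abstract describes.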
| Original language | English |
| --- | --- |
| Pages (from-to) | 529-546 |
| Number of pages | 18 |
| Journal | International Journal of Research in Marketing |
| Volume | 41 |
| Issue number | 3 |
| DOIs | |
| Status | Published - Sep 2024 |