The police use all sorts of information to fulfil their tasks. Whereas the collection and interpretation of information could traditionally only be done by humans, the emergence of ‘Big Data’ creates new opportunities and dilemmas. On the one hand, large amounts of data can be used to train algorithms, allowing them to ‘predict’ offenses such as bicycle theft and burglary, or even serious crimes such as murder and terrorist attacks. On the other hand, highly relevant questions about the purpose, effectiveness, and legitimacy of applying machine learning (‘artificial intelligence’) all too often drown in the ocean of Big Data. This is particularly problematic if such systems are used in the public sector in democracies, where the rule of law applies and where accountability, as well as the possibility of judicial review, is guaranteed. In this article, we explore the role transparency could play in reconciling these opportunities and dilemmas. While some propose making the systems themselves, and the data they use, transparent, we submit that an open and broad discussion on purpose and objectives should be held during the design process. This might be a more effective way of embedding ethical and legal principles in the technology, and of ensuring legitimacy during application.