TPRC45 has ended
Friday, September 8 • 4:10pm - 4:43pm
Sensitive-by-Distance: Quasi-Health Data in the Algorithmic Era


“Quantified Self” apps and wearable devices collect and process an enormous amount of “quasi-health” data: information that does not fit within the legal definition of “health data” but is otherwise revelatory of individuals’ past, present, and future health status, such as information about sleep-wake schedules or eating habits.

This article offers a new perspective on the boundary between health and non-health data: the “data-sensitiveness-by-(computational)-distance” approach, or, more simply, the “sensitive-by-distance” approach. It takes into account two variables: the intrinsic sensitiveness of personal data (a static variable) and the computational distance (a dynamic variable) between certain kinds of personal data and pure health (or sensitive) data, which depends on the computational capacity available in a given historical period of technological (and scientific) development.

Computational distance should be considered both objectively and subjectively. From an objective perspective, it depends on at least three factors: (1) the level of development of data retrieval technologies at a given moment; (2) the availability of “accessory data” (personal or non-personal information); and (3) the applicable legal restraints on processing (or re-processing) data. From a subjective perspective, computational capacity depends on the specific data mining efforts undertaken by a given data controller (or its ability to invest in them): economic resources, human resources, and the utilization of accessory data.
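The interplay of the two variables can be illustrated with a toy scoring sketch. This is purely an illustration under assumed definitions, not a model proposed in the article: the class fields, the scoring formula, and all numeric values are hypothetical, chosen only to show that as a controller's computational capacity grows, the effective distance to pure health data shrinks and the data's effective sensitiveness rises.

```python
from dataclasses import dataclass

@dataclass
class DataCategory:
    """A category of personal data under the sensitive-by-distance idea.
    (Hypothetical structure for illustration only.)"""
    name: str
    intrinsic_sensitiveness: float  # static variable: 0.0 (neutral) to 1.0 (pure health data)
    computational_distance: float   # dynamic variable: effort needed to infer health status

def effective_sensitiveness(data: DataCategory, computational_capacity: float) -> float:
    """Hypothetical scoring rule: greater computational capacity divides down
    the distance, pushing effective sensitiveness toward 1.0 (pure health data)."""
    effective_distance = data.computational_distance / max(computational_capacity, 1e-9)
    return min(1.0, data.intrinsic_sensitiveness + 1.0 / (1.0 + effective_distance))

# Sleep-wake data: low intrinsic sensitiveness, but inferentially close to health data.
sleep = DataCategory("sleep-wake schedule", intrinsic_sensitiveness=0.2,
                     computational_distance=5.0)
small_controller = effective_sensitiveness(sleep, computational_capacity=1.0)
large_controller = effective_sensitiveness(sleep, computational_capacity=10.0)
```

Here the same data category scores as more sensitive for the better-resourced controller, which is the subjective dimension of computational distance described above.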

A direct consequence of the expansion of augmented humanity in collecting and inferring personal data is data subjects' increasing loss of “legibility” over health data processing. Consequently, the first challenge in searching for a balancing test between individual interests and other (public or commercial) interests is achieving a higher level of health data processing legibility, and thereby empowering individuals' role in that processing. This is already possible by exploiting existing legal tools that empower data subjects: for instance, by supporting the full exercise of the right of access (i.e., awareness of the purposes of processing and the logic involved in automated profiling), the right to data portability, and the right not to be subject to automated profiling.


Tim Brennan

Professor Emeritus, Univ of Maryland Baltimore County


Gianclaudio Malgieri

Vrije Universiteit Brussel

Friday September 8, 2017 4:10pm - 4:43pm EDT
Founders Hall - Auditorium
