Legal Issues with AI in Healthcare
We anticipate that as AI in healthcare continues to be used and to evolve, so will this list of legal considerations. Pharmaceutical companies will most likely use AI systems to expand their patent portfolios for traditional medicines [121], [123]. However, AI systems could also be used by competitors or patent examiners to predict incremental innovations, or to detect that an invention is not patentable, for example due to a lack of novelty or inventive step [121], [123]. In addition, trade secret law, combined with technological protections and contracts, can protect complex algorithms as well as the datasets, information, and correlations generated by AI systems [121].

These questions are particularly difficult to answer when AI operates with "black box" algorithms, which can result from uninterpretable machine learning techniques that are very difficult for clinicians to fully understand ([49]; [3], p. 727). For example, Corti's algorithms are a "black box": even Corti's inventor does not know how the software decides when to alert emergency dispatchers that someone is having a cardiac arrest. This lack of knowledge may concern health professionals [46]. For example, how should a clinician disclose that they cannot fully interpret AI diagnostic or treatment recommendations? What degree of transparency is needed? How does this relate to the "right to explanation" under the EU GDPR (explained in more detail in Section 4.3.2)? What about cases where the patient is reluctant to authorize the use of certain categories of data (e.g., genetic data and family history)? How can we balance patient privacy with the safety and effectiveness of AI?

Medical innovations offer a lot of hope. However, similar innovations in governance (law, policy, ethics) are likely necessary if society is to realize the fruits of medical innovation and avoid its pitfalls. As innovations in artificial intelligence (AI) advance rapidly, scientists from various disciplines are voicing concerns about health-related AI that will likely require legal responses to ensure the necessary balance. These scientific insights can provide critical information on the most pressing challenges and help shape and drive future regulatory reforms. However, to our knowledge, there is no comprehensive literature review examining the legal concerns surrounding health-related AI. Our goal is therefore to summarize and map the literature examining legal concerns related to health-related AI using a scoping-review approach.

It is also likely that data controllers under the GDPR will need to conduct a data protection impact assessment (DPIA) for new AI-based technologies to be used in the clinical field. In general, Article 35(1) GDPR requires such a pre-processing assessment for "new technologies" where the processing is "likely to result in a high risk to the rights and freedoms of natural persons". Article 35(3) GDPR expressly specifies when a data protection impact assessment is required, in particular in cases of "a systematic and extensive evaluation of personal aspects relating to natural persons which is based on automated processing, including profiling, and on which decisions are based that produce legal effects concerning the natural person or similarly significantly affect the natural person", or "processing on a large scale of special categories of data" (e.g., genetic and health data).
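As an illustration only (not legal advice), the Article 35 triggers described above can be sketched as a simple check. The field names below are hypothetical labels invented for this example; they do not appear in the GDPR itself.

```python
# Illustrative sketch of the express Art. 35(3) GDPR triggers for when a
# data protection impact assessment (DPIA) is required. Field names are
# hypothetical, chosen only for this example.
from dataclasses import dataclass


@dataclass
class ProcessingActivity:
    uses_new_technology: bool                    # Art. 35(1): "new technologies"
    automated_decision_with_legal_effect: bool   # Art. 35(3)(a): profiling etc.
    special_categories_large_scale: bool         # Art. 35(3)(b): e.g. genetic/health data
    systematic_public_monitoring: bool           # Art. 35(3)(c)


def dpia_required(p: ProcessingActivity) -> bool:
    """Return True if any express Art. 35(3) trigger applies."""
    return (p.automated_decision_with_legal_effect
            or p.special_categories_large_scale
            or p.systematic_public_monitoring)


# A clinical AI tool processing health data at scale would typically trigger (b):
clinical_ai = ProcessingActivity(
    uses_new_technology=True,
    automated_decision_with_legal_effect=False,
    special_categories_large_scale=True,
    systematic_public_monitoring=False,
)
print(dpia_required(clinical_ai))  # True
```

In practice the assessment is far more nuanced (Recital 91 and supervisory-authority guidance qualify "large scale"), so a real determination cannot be reduced to a boolean check.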
Recital 91 of the GDPR specifies that the processing of personal data should not be "considered to be on a large scale if the processing concerns personal data from patients (…) by an individual physician".
Article 35(7) GDPR contains a list of what the assessment must at least include, such as a description of the planned processing operations, an assessment of the risks to the rights and freedoms of data subjects, and the measures envisaged to address those risks. A number of reports from different jurisdictions have identified areas where legislative reform may be required to address health-related AI issues ([4, 7,8,9,10,11,12,13]).
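For illustration, the Article 35(7) minimum contents can be represented as a simple structured record. The key names and descriptions below paraphrase the four headings of Article 35(7); the `is_complete` helper is a hypothetical convenience function, not anything prescribed by the GDPR.

```python
# Illustrative only: the minimum contents of a DPIA under Art. 35(7) GDPR,
# represented as a simple record. Keys paraphrase the Article's four headings.
dpia_minimum_contents = {
    "processing_description": "systematic description of the envisaged processing and its purposes",
    "necessity_proportionality": "assessment of the necessity and proportionality of the processing",
    "risk_assessment": "assessment of the risks to the rights and freedoms of data subjects",
    "mitigation_measures": "measures envisaged to address the risks, including safeguards",
}


def is_complete(dpia: dict) -> bool:
    # A DPIA draft is minimally complete only if every Art. 35(7) heading
    # has a non-empty entry.
    return all(dpia.get(key) for key in dpia_minimum_contents)
```

Such a checklist could anchor an internal review workflow, though completeness in this formal sense says nothing about the substantive quality of the assessment.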