Three proposals for the use of generative AI in the medical field
October 9, 2024
R-2024-027E
Overview
The use of generative AI in the medical field is expanding. While AI regulations are being discussed in various countries, we examined their impact on medical care, focusing on the situation in the UK and Japan. We urge the promotion of use cases for generative AI in medical settings, the securing of regional resources, and the establishment of appropriate regulations.
1. Introduction (Background)
On April 2, 2024, the WHO released a chatbot that uses generative AI. While it provides health advice in eight languages, its answers have been noted to contain errors. Generative AI is an artificial intelligence technology that automatically creates new data and content based on existing data, in formats such as text, images, audio, and video. With advances in deep learning, the accuracy of generative AI has improved greatly in recent years, attracting growing attention. In particular, after OpenAI released ChatGPT, a conversational AI built on a large language model (LLM), on November 30, 2022, the number of users worldwide exceeded 100 million within two months. Following this lead, internet platform companies such as Microsoft, Google, and Baidu released interactive AI services one after another in 2023, and the use of generative AI spread rapidly, leading some to argue that we are now in a fourth AI boom.
Beyond interactive AI, the use of generative AI is expanding in many fields. The global AI market, whose broader economic impact is already estimated at $6 trillion, is expected to reach $1.8 trillion in 2030. In this context, the market for generative AI in the healthcare field (hereinafter "healthcare generative AI" or "HGenAI") is also expected to expand, with possibilities in medical practice, medical education, health insurance, nursing care, and prevention. For example, LLMs are expected to be used for the automatic generation of medical records, diagnostic reports, and summaries; reading medical papers; analyzing data on patients and clinical trial participants; clinical support; medical education; public health risk prediction; and personalized medicine and prevention. Image generation technology can remove noise from and improve the resolution of medical images (MRI, CT scans, etc.) to improve diagnostic quality, and can also generate images for educational purposes with privacy in mind. Audio generation technology will support the preparation of medical records and other documents, as well as communication between medical professionals and patients. Video generation can be used to simulate surgeries and support patients' daily lives. In research, data generation can enable privacy-preserving analysis that incorporates generated data, data augmentation that supplements missing data, and drug discovery through molecular structure design and drug action prediction.
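The privacy-preserving data-generation use above can be sketched in a few lines. The example below is illustrative only: it uses a simple multivariate Gaussian as a stand-in for a real generative model (such as a GAN or variational autoencoder), and the "patient" columns and values are entirely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tabular cohort: columns = [age, systolic BP, HbA1c].
# In practice this would be real, access-controlled patient data.
real = rng.normal(loc=[60.0, 130.0, 6.0], scale=[12.0, 15.0, 0.8], size=(200, 3))

# Fit a simple multivariate Gaussian to the cohort's aggregate statistics...
mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# ...and sample synthetic records that mimic those statistics without
# reproducing any individual's actual row.
synthetic = rng.multivariate_normal(mean, cov, size=500)

# Data augmentation: combine real and generated records for analysis.
augmented = np.vstack([real, synthetic])
print(augmented.shape)  # (700, 3)
```

A production system would use a far richer generative model and formal privacy guarantees (e.g., differential privacy); the sketch only shows where generated data slots into an analysis pipeline.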
In the medical field, in addition to conventional AI that uses a single type of information such as images (single-modal AI), AI that uses multiple types of information such as text, images, and voice (multimodal AI) is being combined with generative AI technology. These applications are expected to develop in concert, and notably, the same generative AI can be used simultaneously for purposes beyond health and medicine. Development in combination with technologies such as the metaverse, digital twins, and robots is also expected.
As the social implementation of generative AI progresses, many ethical, legal, and social issues (ELSI), as well as technical issues and challenges, have come to light. These include hallucination (the output of incorrect content), the spread of deepfakes and false information, the amplification of bias and discrimination present in training data, infringement of privacy and trade secrets, the unsettled status of intellectual property rights, the generation of harmful content and information on dangerous materials such as weapons, unemployment, and the erosion of academic credibility.
Since the mid-2000s, when machine learning using big data became popular, some of these issues had already been pointed out, and in response, several sets of AI principles were presented by various organizations and countries. With the current rapid spread of generative AI, however, some of these issues have resurfaced, and there is a pressing need to address the spread of deepfakes and false information, copyright and portrait-right infringement, and the protection of privacy, trade secrets, and personal data.
In the private sector, the Future of Life Institute (FLI) released an open letter on March 22, 2023 calling for a six-month moratorium on AI development and received many signatures, including that of Elon Musk.
The Italian Data Protection Authority (GPDP) issued a temporary ban on ChatGPT on March 31, 2023. The ban was lifted on April 28 of the same year after countermeasures were taken, but as of 2024, possible violations of the EU's General Data Protection Regulation (GDPR) are still under consideration. Data protection authorities in the UK and other European countries also took note. The European Data Protection Board (EDPB) created a task force on ChatGPT on April 13, 2023 and published a report on May 23, 2024. In Japan, the Personal Information Protection Commission issued a warning about ChatGPT on June 2, 2023, together with a broader alert regarding generative AI services.
In response to recent developments centering on generative AI and its concerns, the principles have been revised, and at the same time, various legal measures and international cooperation have been promoted.
China was among the first countries to adopt rules on generative AI, on July 10, 2023. In the EU, an AI bill proposed in 2021 was enacted on May 21, 2024, and in the United States an executive order on AI safety was issued on October 30, 2023. In the US state of California, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act is currently under consideration. The EU's AI Act and the California bill impose stricter regulations on generative AI, especially targeting models trained with more than a specified amount of computing power.
At the G7 Hiroshima Summit in 2023, the "Hiroshima AI Process" was launched to pursue international cooperation on generative AI centered on the G7, and the "Hiroshima Process International Guiding Principles for All AI Actors" were established. International cooperation under the Hiroshima AI Process has spread from the G7 to the OECD, other countries, and international organizations. To ensure the safety of AI, an AI Safety Summit was held in the UK on November 1, 2023, at which 28 countries, including the UK, the US, EU members, and China, signed a declaration on the safe and responsible development of AI. AI Safety Institutes (AISI) were established in the US, the UK, and other countries, and Japan's AISI was established on February 14, 2024. The OECD revised its AI Principles in May 2024. The Global Partnership on AI (GPAI), established by the OECD and the G7 in 2020, now has 44 participating countries working to implement human-centered, safe, secure, and reliable AI. In September 2023, UNESCO presented guidance on generative AI in education and research, and on March 21, 2024, the UN General Assembly adopted a resolution on seizing the opportunities of safe, secure, and trustworthy AI systems for sustainable development. The Council of Europe adopted the Framework Convention on AI on May 17, 2024, and Japan, as an observer state, needs to consider its response based on the convention's content.
In this way, along with the international development of generative AI, discussions on legal regulations and efforts for international cooperation are underway.
2. Regulation of generative AI and its impact on healthcare
(1) Regulation in the UK
The UK, having withdrawn from the EU, aims to become a global leader in AI while partly charting its own course relative to the EU. There have been no legislative moves in the UK to align with the EU's AI Act, nor any moves to enact comprehensive legislation specifically regulating generative AI.
In September 2021, the UK announced a 10-year plan to make it a global AI power, and it remains positive about the use of generative AI. In March 2023, a white paper on AI regulation was published, with the government response following in February 2024. The white paper stipulates that AI will be regulated in accordance with five principles: safety, security, and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. Unlike the EU, however, the UK government will not regulate AI through general-purpose AI legislation. In March 2024, in line with this response, only a simple bill (the AI (Regulation) Bill) was introduced, which would establish a new body called the AI Authority to implement the regulation.
In October 2023, the Online Safety Act came into effect, including a response to disinformation, which is becoming more problematic due to generative AI.
In terms of copyright and intellectual property, discussions were taking place even before the expansion of generative AI. The AI (Regulation) Bill would require that generative AI be trained only on data obtained lawfully under copyright law.
In the UK, personal data and privacy protection is based on the Data Protection Act, which corresponds to the EU's GDPR. On the relationship between AI and data protection, the Information Commissioner's Office (ICO) updated its guidance on March 15, 2023 and, on April 3, set out eight points to note regarding generative AI.
Privacy related to medical practice and medical research is protected by the Data Protection Act, common law, and relevant laws of the NHS (National Health Service). Users can opt out of secondary use of NHS data for research and policy purposes.
For clinical trials and investigations of drugs and medical devices, approval by the Medicines and Healthcare products Regulatory Agency (MHRA) must be obtained in accordance with the Medical Devices Regulations 2002, the Medicines for Human Use (Clinical Trials) Regulations 2004, and the Human Medicines Regulations 2012.
(2) Status of Regulations in Japan
In Japan, Article 30-4 of the Copyright Act, newly established in the 2018 revision, made it lawful to use copyrighted works for machine learning, including the training of generative AI. However, in light of international discussions, the Legal System Subcommittee of the Copyright Subcommittee of the Council for Cultural Affairs issued a report titled "Views on AI and Copyright" on March 15, 2024, and the Copyright Division of the Ministry of Education, Culture, Sports, Science and Technology issued a "Checklist and Guidance on AI and Copyright" on July 31, 2024. In May 2024, the Committee on Intellectual Property Rights in the Age of AI, under the Intellectual Property Strategy Promotion Secretariat of the Cabinet Office, issued an interim report on the relationship between generative AI and intellectual property rights.
In relation to the Act on the Protection of Personal Information, there has been little movement apart from the warning issued by the Personal Information Protection Commission in June 2023. However, the interim report on the triennial review of the Act, dated June 27, 2024, mentions generative AI in connection with uses of data that do not require the consent of the individual concerned. The discussion on revising the law warrants continued attention.
Measures against disinformation and misinformation were also discussed by the Ministry of Internal Affairs and Communications' “Study Group on How to Ensure the Soundness of Information Distribution in the Digital Space,” and a draft summary was released on July 19, 2024.
In the past, discussions on AI regulation in Japan, as in the UK, have focused on guidelines and principles (soft law) rather than legal regulation (hard law). In 2024, the Ministry of Internal Affairs and Communications and the Ministry of Economy, Trade and Industry issued the "AI Guidelines for Business" as a compilation of related guidelines. At the same time, the Liberal Democratic Party has moved toward AI legislation, and a study group on institutionalization was established in the Cabinet Office on August 2, 2024.
In relation to medical devices, AI is positioned under the Act on Pharmaceuticals and Medical Devices as software as a medical device (SaMD), and related guidelines have been issued. At present, AI as SaMD is positioned as a tool to assist physicians, and physicians remain responsible for decisions made using AI.
In addition, voluntary guidelines for HGenAI were issued by business operators in January 2024.
(3) Direction of AI regulation and its impact on the medical field
As we have seen, regulation of AI in general varies internationally, reflecting differences in cultural background and legal systems, and ranges from hard-law regulation as in the EU to principles-based soft-law approaches as in the UK. On the other hand, EU rules, typified by the GDPR, have shaped global rules (the "Brussels effect"), suggesting that strict regulations similar to the EU's AI regulations may come to be required globally. Although the EU's regulations are strict, it is worth noting that the EU has taken a co-regulatory approach that incorporates voluntary rules on the part of businesses in order to avoid stifling innovation.
On the other hand, these general AI regulations do not necessarily cover all of HGenAI. Even AI practices that are in principle prohibited under the EU's AI Act may be developed and used for public health and medical purposes where a legal basis exists, though it is not yet clear what specific legal arrangements the EU will make. The European Health Data Space Act, being discussed in parallel in the EU and expected to pass in 2024, also mentions coordination with the AI Act and medical device regulations, but how this will be accomplished remains to be seen.
In the UK, a regulatory sandbox called AI-Airlock[1] has been adopted, and a useful document examining the impact of AI regulation on medical care has been published.
Regarding the direction of HGenAI-related regulation, the WHO has issued recommendations and guidelines. Because the businesses that develop generative AI operate across borders, internationally harmonized guidelines for governments, businesses, and users are required.
3. Recommendations
(1) Promoting Use Cases
A number of HGenAI startups and research projects are attracting attention, including those at Mayo Clinic, Kyoto University Hospital, Tohoku University Hospital, NEC, Eiju General Hospital, Ubie, and HITO Hospital, as well as under SIP (the Cross-ministerial Strategic Innovation Promotion Program). HGenAI is also being introduced at medical institutions in Japan and overseas through initiatives such as an AI drug discovery platform project[2].
According to a survey conducted by the Japan Agency for Health Care Policy in January 2024, HGenAI is expected to help in areas such as reducing the burden on medical sites, but it is not yet widespread. Compared with other industries, deployment of generative AI in healthcare is not particularly advanced, and the degree of penetration differs by field.
As of 2023, HGenAI showed considerable accuracy, with studies demonstrating performance at a level sufficient to pass the United States Medical Licensing Examination (USMLE) as well as Japan's national medical licensing examination. These results raise the question of whether the current national examinations adequately measure doctors' abilities in the first place, and they may also reflect the large amount of exam-related material on the internet absorbed as training data.
Efforts to prepare documents such as discharge summaries are increasing, but there are concerns that incorrect information, such as nonexistent data, may be output.
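One lightweight safeguard against such fabricated values is a post-hoc grounding check that flags numbers in a generated summary that never appear in the source record. The sketch below is our own simplified illustration; the function name and example strings are hypothetical, and a real system would also need to handle units, dates, and paraphrase.

```python
import re

def ungrounded_numbers(summary: str, source: str) -> list[str]:
    """Return numeric values in a generated summary that do not
    appear anywhere in the source record (a crude hallucination screen)."""
    number = r"\d+(?:\.\d+)?"
    source_nums = set(re.findall(number, source))
    return [n for n in re.findall(number, summary) if n not in source_nums]

# Hypothetical record and generated summary with one fabricated value.
record = "BP 128/82, HbA1c 6.4, discharged on day 5."
summary = "Blood pressure 128/82; HbA1c 7.1; 5-day stay."
print(ungrounded_numbers(summary, record))  # ['7.1']
```

A flagged value would be routed back to the clinician for verification rather than auto-corrected, consistent with the principle that physicians remain responsible for AI-assisted output.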
Regarding the use of HGenAI as a communication tool between doctors and patients, including in cognitive behavioral therapy, many studies find that human doctors are trusted more, partly for cultural reasons. Recent reports, however, suggest that chatbots can respond to and empathize with patients better than doctors, and the design of chatbot user interfaces is rapidly gaining importance. Regarding the quality of answers, a 2022 report described inappropriate advice (encouraging suicide) being given to a simulated patient, and accuracy still needs to improve before such tools are usable in clinical practice.
On the other hand, results are already being seen in areas such as reducing work burdens, and it is important to multiply and promote useful HGenAI cases in such fields. Japan has a large amount of medical and nursing care data thanks to universal health insurance, and we hope to see examples of HGenAI R&D and implementation that utilize these vast data on the elderly as society continues to age.
In doing so, given the sheer computing power required, cooperation with major platform operators is unavoidable, but regulation of international platform operators (an aspect the EU's AI Act also has) and economic security must also be considered.
(2) Securing Regional Resources for Deployment
Electricity and other infrastructure are important for deploying generative AI. In particular, when an LLM is used, computing resources and network connectivity are prerequisites, and a certain level of funding and human resources is essential to maintain the system.
However, such resources are generally insufficient, especially in rural areas. Although HGenAI has the potential to compensate for the shortage of medical providers in rural areas, the resources required for its implementation are themselves lacking.
Given depopulation driven by the declining birthrate and aging population, human resources and systems will, to some extent, have to be shared among multiple regions.
Although much depends on the intended use of HGenAI, the first step is to introduce successful cases from other regions that can be handled with existing human resources. In addition, academic societies and other organizations will need to work together on education for the future use of generative AI, so that undergraduate education provides junior doctors, nurses, and other medical professionals with a minimum level of AI literacy.
(3) Developing Regulations for Medical AI Compatible with Innovation
As we saw in (1) above, in the current situation where HGenAI is not yet widespread, excessive regulation may hinder innovation. On the other hand, guidelines like Japan's current ones are not enforceable, and given the risks to patients, certain regulations are essential. What kinds of regulation, then, are needed in which areas?
First, for generative AI in general, while its relationship with the Copyright Act and the Act on the Protection of Personal Information is currently being sorted out, a mechanism for lawful data collection as a prerequisite for training must be established. Given that training data becomes more powerful as it accumulates, regulation from the viewpoint of competition law and platform regulation is also important.
On top of that, in healthcare in particular, as indicated by the WHO guidelines, rules at three levels are required: for the Japanese government, for businesses, and for users.
Regarding rules for the government, the positioning of AI under the Act on Pharmaceuticals and Medical Devices and under the Medical Care Act and the Medical Practitioners Act should be reviewed as part of the current study on medical DX. In particular, the positioning of AI in relation to medical practice under Article 17 of the Medical Practitioners Act will need to be reexamined. The use of data from public databases such as the NDB (National Database) as training data should also be clearly defined. International cooperation will remain important; in particular, as with military AI such as lethal autonomous weapons systems (LAWS), careful international discussion is required for AI directly linked to risks to life, such as bioterrorism.
With regard to rules for business operators, we hope that the current voluntary guidelines will be reviewed and repositioned as co-regulation. For advertising expressions, the regulations on FOSHU (Food for Specified Health Uses) and cosmetics may also serve as a reference.
Finally, since ChatGPT surged in popularity, many organizations have issued rules prohibiting its use. As an outright ban is virtually impossible to enforce, however, safe and ethical use should be promoted instead. With various uses envisaged, including medical education, research, and patient communication, it is important to note that generative AI is used not only by medical professionals but also by patients themselves. While appropriate use promises benefits such as improved patient literacy, empowerment, and access to appropriate medical care, wrong information can also lead patients to take risky actions. The viewpoint of protecting such patients is essential when presenting rules for use.
4. Conclusion
In this paper, we have looked at the current situation of generative AI and its medical applications in Japan and overseas, and presented what we believe should be done, especially in Japan. The situation surrounding generative AI is moving very quickly, both in Japan and overseas.
The research program "Talent development for local healthcare DX" will continue to track trends in HGenAI, with particular attention to actual cases including regional implementation, and will organize the issues involved in regional medical DX and make policy recommendations.
[1] MHRA to launch the AI-Airlock, a new regulatory sandbox for AI developers - GOV.UK (www.gov.uk)
[2] https://prtimes.jp/main/html/rd/p/000000033.000118477.html