ROME – “Artificial Intelligence cannot replace the doctor, but the doctor can use this tool to improve his professional skills, optimizing his personal knowledge and experience.” So says Antonio Magi, president of the Order of Doctors, Surgeons and Dentists of the province of Rome, speaking about the increasingly widespread use of artificial intelligence systems in healthcare.
“If left under the doctor’s management – he continues – the use of Artificial Intelligence is extremely positive, provided that it takes place in compliance with certain rules and within precise limits. In administrative tasks, such as the management of appointments and schedules, it can certainly be of help. If, however, artificial intelligence decides that, in order to reduce waiting lists, visits must last no more than a minute, that is clearly something wrong that does not help healthcare.”
“Therefore – the president of Omceo Rome stresses – chatbots, GPT-style chats and other machine learning services in the healthcare sector are welcome, as long as the doctor uses them autonomously. The important thing is that everything is managed by the healthcare professional, especially the methods and timing. If, however, the doctor is forced to follow the indications of the artificial intelligence, we are faced with a distortion.”
The head of the Capital’s Omceo says he is extremely concerned by the developments that artificial intelligence is undergoing in the healthcare sector. “Situations are being created that do not help the doctor at all in his professional activity. I was very struck, for example, by an artificial intelligence platform for general medicine approved by the last Council of Ministers, which could even act as a control on the type of prescriptions, effectively taking away from the doctor the professional autonomy essential to the quality of the profession.”
“And the quality of the doctor – Magi specifies – lies not only in his decision-making autonomy but also in his experience when he is called upon to decide which treatment and which therapy to administer to the patient. This is why I view with great concern the way in which artificial intelligence could direct the doctor and his work.”
Magi then turns to a medico-legal aspect: “If, following a diagnosis, the doctor prescribes a therapy certified by Artificial Intelligence and it then turns out not to be correct, the responsibility always falls on the doctor. So should he use it or not? We could find ourselves facing a form of defensive medicine applied to artificial intelligence as well,” concludes the president of Omceo Roma.