by Emily Curwood
The constant evolution of medicine and its treatments and approaches often brings improvement and new possibilities, but also debate and ethical concerns. In this article I will explore two ethical dilemmas surrounding medical advances that are often discussed in the 21st century.
Euthanasia is a topic that is often debated in the medical world and has led to divided views, evidenced by the discrepancies between the laws of different countries. Euthanasia is defined as “the practice of intentionally ending a life to relieve pain and suffering”. In the UK, euthanasia is regarded as either manslaughter or murder, with a maximum penalty of life imprisonment. A central issue with euthanasia is the patient’s inability to give consent, known as non-voluntary euthanasia, which is illegal in all countries. This is often the case when the person is in a coma, is too young or has brain damage; even if they have previously expressed a wish for their life to be ended in these circumstances, they cannot consent at the time. So, while this may be the wish of the person, the decision is ultimately made by another person, sometimes the patient’s doctor, who may have little understanding of the patient themselves, only of the condition they have.

The motivation for euthanasia is to relieve pain and suffering; however, these patients also have the right to high-quality end-of-life (palliative) care, which is often seen as the more justifiable option because it treats the patient as a person rather than a set of symptoms, arguably making euthanasia unnecessary. On the other hand, there are many arguments for euthanasia, such as freedom of choice and the right to die, and supporters argue that the issues and discrepancies surrounding euthanasia could be minimised with proper regulation. Yet it may be difficult to regulate something so individual to each patient and their circumstances: regulations may set a rigid and inflexible precedent for a wide variety of cases.
Another ethical debate that has surfaced with the evolution of medical technology concerns Artificial Intelligence (AI). Using AI to complete tasks normally carried out by doctors, such as routine operations or diagnosing diseases, brings multiple questions to the forefront, such as who is to blame if an error occurs: the manufacturer or the hospital? AI also adds uncertainty to the structure of healthcare, not just when mistakes or errors occur but also in training. The constant evolution of AI’s abilities means that skills a medical student may have spent years learning and developing may no longer be needed by the time they become a doctor. Another frequently discussed concern is the bias that AI may carry in its algorithms, which could have serious negative impacts on diagnosing patients. Because these algorithms for AI in medicine are newly developed, it is difficult to predict and understand the biases against gender or wealth that they may contain or develop over time. This is particularly difficult because the only way to improve the technology is to evaluate how it performs in a hospital environment, and putting underdeveloped technology in a situation where people’s wellbeing is on the line is far too risky. However, these drawbacks could be countered by using AI outside of personal interactions, for example in laboratories, where it would still contribute to healthcare by analysing blood samples or screening DNA for mutations. But whether AI is adequately equipped to deal with situations such as diagnosis, where the results are so impactful, is something only time will tell.
Sources:
http://www.bbc.co.uk/ethics/euthanasia/infavour/infavour_1.shtml
http://www.bbc.co.uk/ethics/euthanasia/against/against_1.shtml
https://www.nhs.uk/conditions/euthanasia-and-assisted-suicide/
https://www.hippocraticpost.com/medico-legal/top-5-ethical-issues-in-medicine/