Sarah H Raza
2 min read · Apr 26, 2021

We have learned a great deal about the limitations and potential harms of AI in healthcare, many of which have been amplified by COVID-19. Yet its capabilities are often overstated. For example:

- AI alone can’t predict the spread of new pandemics because there is no database of prior COVID-19 outbreaks as there is for the flu.

- Training data drawn from small or unrepresentative COVID-19 patient populations can lead to biased estimates that do not accurately represent mortality risk.

- Bias in AI models results in skewed estimates across different subgroups.

- AI deployed to fight COVID-19 has raised pervasive surveillance concerns in countries throughout the world.

- Diversity in training data is rarely practiced.

Reports on healthcare AI often overstate the tasks it can perform, exaggerate claims of its effectiveness, neglect the level of human involvement, and fail to consider related risks.

I would therefore advise that healthcare AI developers and vendors be subjected to a higher level of scrutiny. This may require involving experts such as:

- Clinical informatics experts

- Operational experts involved from the inception of product development

- Data ethics practitioners

In conclusion, serious consideration must be given to health equity if we want to make real progress with AI in healthcare.

AI is a permanent, yet often overstated, fixture in healthcare.
