The New Hippocratic Oath (AI Series Part 3) - by Brian Chau

Imagine you are a doctor. In front of you is a patient whom you are about to operate on. He is fully anaesthetized, at the mercy of your mind and hand. The surgery rests on trust painstakingly built through centuries of practice. At its base is the Hippocratic Oath: Do No Harm. It doesn’t matter if business interests would offer you a large sum of money to kill your patient. It doesn’t matter if your patient has abhorrent political beliefs. It doesn’t matter if the patient’s organs could be used to save others. For our medical system to function and for our society as a whole to reap the rewards of longevity and health, this trust must withstand all other interests.

Many professions have variations on this code of honor. Lawyers must put their clients’ interests first. CEOs must serve their shareholders. Like these professions, AI technologies rely on the trust of clients. Most companies do not allow clients to audit their models for legitimate intellectual property reasons, and even if they did, very few people would have the technical knowledge to conduct such an assessment. The proliferation of AI relies on an airtight relationship of trust, a shared understanding that AI must support the interests of the user and not interfere with them. Technological, legal, and social safeguards must all exist to build up this relationship of trust.

