Strong Boundaries Make Responsible AI Practices Easier

Rania Bailey
3 min read · May 29, 2024
Image generated by GPT-4o, prompt “an image of an AI implanting itself into a human’s decision-fatigued brain”

No other technology has promised to influence how we think as directly as GPT-class AIs do. Their recent popularity raises novel questions about how to responsibly integrate their thinking with our own, should we choose to do so. In my opinion, doing so responsibly requires *upholding* the boundary between AI thought and human thought, so that users, rather than AI publishers or the models they build, maintain control over their own thought processes.

As an end user, reinforcing this boundary in practice might look like drafting the core content of an idea before sharing any of it with an AI for refinement. This could be as simple as writing an outline before asking an AI to turn it into an essay, or as involved as prioritizing a digital product's features yourself before asking an AI to brainstorm the product's possible uses. These exercises establish and reinforce the boundary between your thinking and the AI's output, so that appropriate safety and responsibility practices can be enacted without unnecessarily slowing down the acceleration the AI provides.

Reinforcing this boundary between human and AI, rather than aiming to erase it, contributes to AI safety practices in several ways. A few of these are decision accountability, the intuitive practice of human oversight, and…
