Strong Boundaries Make Responsible AI Practices Easier
No other technology has promised to influence how we think as directly as GPT-class AIs do. Their recent popularity raises novel questions about how to responsibly integrate their thinking with our own, should we choose to do so. In my opinion, doing this responsibly requires *upholding* the boundary between AI thought and human thought, so that users — rather than AI publishers or the models they build — maintain control over their own thought processes.
As an end user, reinforcing this boundary in practice might look like drafting the core content of an idea before sharing any of it with an AI for refinement. This could be as straightforward as writing an outline before asking an AI to turn it into an essay, or as involved as prioritizing a digital product's features yourself before asking an AI to help brainstorm the product's possible uses. These exercises establish and reinforce the boundary between your thinking and the AI's output, so that appropriate safety and responsibility practices can be enacted without unnecessarily slowing down the acceleration the AI provides.
Reinforcing this boundary between human and AI, rather than aiming to erase it, contributes to AI safety practices in several ways: clearer decision accountability, more intuitive human oversight, and a stronger sense of self for users.
Reinforcing the boundary between an end user's thought-work and the output generated by an AI increases clarity with regard to accountability. If it is easy to tell what was done by the AI and what was done by the human, it becomes far easier to determine where responsibility lies for any decisions made. Already, courtrooms are finding that the human involved is culpable for AI-encouraged actions; ensuring that the experience of using an AI reflects this responsibility can help users think carefully about the actions they take while under an AI's influence. Holding this boundary firmly can also help users recognize when that influence diverges from their best interests, and update their plans accordingly.
Reinforcing this boundary between an AI's thinking and a human's thinking or choice to act also decreases the friction involved in human oversight of AI usage. We're inclined to trust our…