Effective AI Use at Varying Levels of Expertise

Rania Bailey
Jun 12, 2024



Introduction

It’s well-established that an AI’s output must be taken “with a grain of salt” due to the risks of hallucination or misinformation. However, exhibiting this skepticism varies in difficulty across different levels of domain expertise. Here’s an approach to getting the most out of AI at different levels of domain knowledge, from expert to absolute beginner.

Expert

Defined

This category describes domains in which you have deep expertise and could answer almost any question. You literally wrote the book (or could write the book) on the topic. You could use an AI, but it’s as likely to slow your work down as to accelerate it (1). The AI is an amalgamation of remixed crowd wisdom; if you’re already the expert, the crowd doesn’t have much to offer.

Effective AI Practices

If you wanted to use an AI anyway, you could ask it to critique a work sample according to criteria you determine. The more specific the criteria you provide, the more likely it is that you’ll get something useful from the output. Of course, there’s no guarantee that the critique offered will be useful, but the effort required to ask is likely low.
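The criteria-driven critique described above can be sketched as a simple prompt builder. This is a hypothetical illustration, not a prescribed workflow; the criteria and work sample are placeholders you would replace with your own, and the actual LLM call is omitted.

```python
# Hypothetical sketch: assembling a critique prompt from
# expert-defined criteria. The more specific the criteria,
# the more useful the resulting critique tends to be.

def critique_prompt(sample: str, criteria: list[str]) -> str:
    """Build a prompt that asks for a critique against a numbered rubric."""
    rubric = "\n".join(f"{i}. {c}" for i, c in enumerate(criteria, 1))
    return (
        "Critique the following work sample against each criterion, "
        "one at a time, citing specific passages:\n"
        f"{rubric}\n\nWork sample:\n{sample}"
    )

# Placeholder sample and criteria for illustration only.
print(critique_prompt(
    "Our Q3 rollout plan staggers deployment across three regions...",
    ["Are the risks quantified?", "Is the rollback path explicit?"],
))
```

Because the expert sets the rubric, the AI's critique stays anchored to standards you already trust, which keeps the cost of a useless answer low.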

Fluent

Defined

The next tier of expertise is “fluency”. You know the shape of the landscape, you know who the trusted voices are, and you would be able to spot an error in the midst of new information pretty quickly and confidently. An AI, which is not an authority, can access authoritative sources to synthesize new information, making it a helpful tool for activities or domains in which you have this level of expertise. It can be used to expand that expertise just a bit further with minimal risk because of the context your existing knowledge gives you.

Effective AI Practices

If you know where to direct the AI, you can ask it for information on topics of interest with greater confidence that it will produce valid and reliable output than if that output were based on training data alone. This is a core concept in RAG systems (2), which are quickly gaining popularity. Asking the LLM to synthesize ideas from known sources, highlight key themes, or identify divergent perspectives within a largely consistent body of work all leverage its ability to produce output reflecting linguistic patterns without needing the machine to “understand” what that output describes. This minimizes the risk of misinformation and can accelerate analysis tasks. Domains in which you have fluent expertise are strong candidates for realizing the value of AI tools in your work.
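The grounding idea behind RAG can be sketched in a few lines. This is a toy illustration under loud assumptions: an in-memory list of trusted snippets and a word-overlap scorer stand in for a real vector store, and the final LLM call is omitted entirely.

```python
# Toy sketch of the retrieval step in a RAG pipeline.
# Real systems use embeddings and a vector database; here a
# keyword-overlap score stands in for semantic search.

def score(query: str, doc: str) -> int:
    """Count words shared between query and document (toy retriever)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k documents that best match the query."""
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Ground the answer in retrieved sources, not training data alone."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, corpus))
    return (
        "Answer using ONLY the sources below. "
        "If they do not contain the answer, say so.\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

# Placeholder corpus of "known, trusted" snippets.
corpus = [
    "RAG systems combine retrieval with generation to ground outputs.",
    "Unsplash hosts freely licensed photography.",
    "Vector databases index embeddings for semantic search.",
]
prompt = build_prompt("How do RAG systems ground their outputs?", corpus)
print(prompt)
```

The key design point is the instruction to answer only from the supplied sources: your fluency lets you choose trustworthy sources, and the prompt constrains the model to them.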

Working Knowledge

Defined

The third tier of expertise is “working knowledge”. It describes knowing enough about something to execute on it regularly, intuitively, and implicitly, often without thinking about it, and it’s very likely where most of anyone’s expertise lands. It requires an understanding of what the core concepts are and how they interact with each other. Many LLMs can mimic this ability, but without further explainability, these performances cannot be trusted as conceptual understanding. AI advocates promise that AI can (or soon will be able to) automate, accelerate, or otherwise augment working knowledge, and sometimes this proves true. However, it is not always true, and it can be quite difficult for an end user to tell the difference. This makes working knowledge both the most appealing and the most hazardous domain for AI usage.

Effective AI Practices

AI is most tempting to use here because at this tier of expertise, knowing just a little bit more, or gaining just a little bit of speed, promises to yield outsize effects. These opportunities are already known to be ripe for process improvement, and many AI tools offer to do just that. However, this is also where the illusion of knowledge that AIs create is most dangerous. This is where things like fake court cases, buggy code, and spurious correlations can all too easily be misinterpreted as factual, reliable resources (3, 4, 5). The risk of misinformation is exceptionally high when you have just enough knowledge to (mis)interpret familiarity as accuracy, and not quite enough to spot the difference.

Introductory Knowledge

Defined

I distinguish between introductory knowledge and working knowledge. Introductory knowledge is “just enough to be dangerous”: you’re getting things wrong more often than you’re getting them right, and you’re still discovering which resources — authors, thinkers, books, services — can help you learn the skill. It involves constant mistakes and constant learning. AIs, programmed to be “helpful and friendly” according to how their creators define those terms, are again both helpful and risky at this level of expertise.

Effective AI Practices

The greatest peril here is that the AI can sound like a valid authority without being a valid authority, and you have no way of telling the difference. For this reason, it’s wise to avoid asking the LLM directly for ideas and insights. The human brain has trouble distinguishing between “plausibly true” and “actually true”, which significantly increases susceptibility to misinformation (6). Worse, even once misinformation is corrected, the initial belief often still persists (7). Since the human brain is wired for easy processing of information, and the LLM is not a reliable source of trustworthy information, it is wise to refrain from using it for direct inquiry in domains whose contours you’re still learning.

That said, the LLM can expand introductory knowledge by helping you navigate the new domain. It can help you shape inquiries, identify useful keywords, and provide (likely incomplete) summaries of available resources. When used this way, as a meta-tool to navigate information rather than to directly inform, the AI offers quite a lot of value with a substantially lessened risk.
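The meta-tool idea above can be made concrete with one more hypothetical prompt sketch: rather than asking the LLM for answers, ask it for search terms and resource types you can then verify independently. The topic below is a placeholder.

```python
# Hypothetical sketch: using the LLM as a meta-tool to map a new
# domain. The prompt asks for navigational aids (keywords, resource
# types), not direct claims, reducing exposure to misinformation.

def navigation_prompt(topic: str) -> str:
    """Ask for keywords and resources to verify elsewhere, not answers."""
    return (
        f"I am new to {topic}. Do not explain it to me. Instead, list "
        "search keywords, names of well-known practitioners, and kinds "
        "of resources (books, standards, communities) I should look up "
        "and verify independently."
    )

print(navigation_prompt("database indexing"))
```

Everything the model returns here is a lead to check against human-vetted sources, not a claim to absorb directly.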

Unknowns

Defined

All other domains fall into the ‘unknown’ category. These are things that you’ve never given attention to and would not know how to start navigating. These areas benefit the most from seeking human mentorship or expertise, especially since current LLMs generally require a strong prompt to perform well enough to be useful day-to-day.

Effective AI Practices

It will be especially challenging to prompt the AI effectively in these domains since they are — by definition — the ones you know the least about. This underscores the value of working with a human to develop a basic understanding before integrating AI. Once that baseline knowledge has been established in a trustworthy way, a domain is no longer completely unknown, and the practices associated with introductory knowledge apply.

Conclusion

An LLM-style AI offers different levels of support and risk depending on your current knowledge of a given domain. The benefits and risks of using an AI at very high or very low levels of expertise are smaller than those found in the middle. An LLM offers the greatest benefits in expanding working or introductory knowledge, but these are precisely the domains in which the greatest care is required to avoid accidentally absorbing plausible misinformation. These middle tiers highlight that it is always crucial to use AI with care. To learn more about how you can practice thoughtful, effective AI use, schedule a consultation.

Footnotes

1 https://www.zdnet.com/article/generative-ai-may-be-creating-more-work-than-it-saves/

2 https://arxiv.org/pdf/2005.11401

3 https://www.forbes.com/sites/mollybohannon/2023/06/08/lawyer-used-chatgpt-in-court-and-cited-fake-cases-a-judge-is-considering-sanctions/

4 https://www.theregister.com/2024/05/01/pulumi_ai_pollution_of_search/

5 https://venturebeat.com/business/when-ai-flags-the-ruler-not-the-tumor-and-other-arguments-for-abolishing-the-black-box-vb-live/

6 https://www.pbs.org/newshour/show/believe-read-internet

7 https://cits.ucsb.edu/fake-news/why-we-fall
