Safety and Limitations of GenAI

Critical Limitations of GenAI

Generative AI is a powerful tool, but it has critical limitations that every user must understand. Here are five key limitations, their causes, and how to mitigate them:

1. Hallucination

Cause: Probabilistic prediction / pattern filling

Example: AI invents a non-existent historical event or research paper

Mitigation: Fact-check claims with reliable sources; be wary of overly specific details without citation; ask for sources (but verify them)
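Why hallucination happens can be sketched with a toy next-token model. The dictionary and the treaty names below are invented for illustration; real models work on learned probabilities over huge vocabularies, but the principle is the same: the model samples by likelihood, not by truth, so a plausible-sounding fabrication can come out alongside a real fact.

```python
import random

# Toy "language model": next-word probabilities learned from text patterns,
# not looked up in a database of facts. All values here are invented.
next_word_probs = {
    ("the", "treaty", "of"): {
        "Versailles": 0.6,  # real treaty, common in training text
        "Grandova": 0.4,    # plausible-sounding but non-existent (made up here)
    },
}

def sample_next(context):
    """Sample the next word by probability -- likelihood, not truth."""
    probs = next_word_probs[context]
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Over many completions, the model sometimes emits the fictional treaty:
random.seed(1)
completions = [sample_next(("the", "treaty", "of")) for _ in range(100)]
print(completions[:10])
```

Nothing in the sampling step distinguishes the real treaty from the fake one; both are just high-probability continuations, which is why specific-sounding details still need fact-checking.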

2. Bias

Cause: Learned from biased training data

Example: AI generates stereotyped descriptions of professions or groups

Mitigation: Be aware of potential biases; seek diverse perspectives; critically examine assumptions in the output
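How bias is inherited from data can be shown with a minimal frequency model. The tiny corpus below is fabricated and deliberately skewed; the point is that a model which simply learns co-occurrence statistics reproduces whatever imbalance its training data contains.

```python
from collections import Counter

# Fabricated, deliberately skewed "training corpus" for illustration.
corpus = [
    "he is a doctor", "he is a doctor", "he is a doctor",
    "she is a nurse", "she is a nurse", "she is a nurse",
    "she is a doctor",  # minority pattern, outvoted by the skew
]

# Count how often each pronoun appears with each occupation.
pair_counts = Counter()
for sentence in corpus:
    pronoun, *_, occupation = sentence.split()
    pair_counts[(occupation, pronoun)] += 1

def most_likely_pronoun(occupation):
    """Return the pronoun most often seen with this occupation in training."""
    candidates = {p: c for (occ, p), c in pair_counts.items() if occ == occupation}
    return max(candidates, key=candidates.get)

print(most_likely_pronoun("doctor"))  # follows the skew, not reality
print(most_likely_pronoun("nurse"))
```

The model is not "deciding" anything about professions; it is echoing its data, which is why examining the output's assumptions matters.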

3. Outdated Information

Cause: Static training data / knowledge cutoff

Example: AI provides incorrect information about current leaders or recent events

Mitigation: Check the model’s knowledge cutoff date; verify time-sensitive information with current sources
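A simple way to operationalize the cutoff check is to compare the date of the fact you are asking about against the model's documented cutoff. The cutoff date below is a placeholder assumption; consult your model's documentation for the real one.

```python
from datetime import date

# Hypothetical cutoff -- replace with your model's documented cutoff date.
KNOWLEDGE_CUTOFF = date(2023, 4, 1)

def needs_verification(event_date, cutoff=KNOWLEDGE_CUTOFF):
    """Flag facts dated after the cutoff: the model cannot have learned them."""
    return event_date > cutoff

# An event after the cutoff must be verified with current sources.
print(needs_verification(date(2024, 11, 5)))  # True: after the cutoff
print(needs_verification(date(2020, 11, 3)))  # False: within training data
```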

4. Overconfidence

Cause: Statistical likelihood ≠ factual accuracy

Example: AI confidently presents incorrect medical advice or financial data

Mitigation: Do not rely solely on AI for critical decisions; treat outputs with skepticism regardless of tone; verify independently
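The gap between likelihood and accuracy can be made concrete with a softmax over candidate answers. The answers and scores below are invented: when a wrong answer happens to score highest, the model reports it with high confidence, because confidence here only measures relative likelihood, never correctness.

```python
import math

def softmax(scores):
    """Convert raw scores into a probability distribution summing to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidates and model scores; all values are invented.
answers = ["wrong answer A", "correct answer", "wrong answer B"]
scores = [5.0, 2.0, 1.0]  # the wrong answer happens to score highest

probs = softmax(scores)
confidence = max(probs)
chosen = answers[probs.index(confidence)]

# High confidence, wrong answer: tone tells you nothing about truth.
print(f"{chosen} ({confidence:.0%} confident)")
```

This is why outputs deserve the same skepticism whether they are hedged or stated flatly: the confident tone comes from the score gap, not from any check against reality.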

5. Lack of Reasoning

Cause: Pattern matching vs. true understanding/logic

Example: AI fails a simple logic puzzle it hasn’t seen a pattern for

Mitigation: Use AI for tasks matching its strengths (language generation, summarization) but not for complex reasoning or novel problem-solving
