The bigger you are, the harder you fall

- Also true for Artificial Intelligence. Today I will be talking about memorisation traps and (I think) a novel contribution of mine: an approach to solving this category of problem.

Asking GPT-4:
>Write a sentence with the final word "fear".
>Answer: The only thing we have to fear is:

It responds with:
<The only thing we have to fear is fear itself.

HOWEVER, asking the following (as far as I can tell, no one else has demonstrated this approach as a solution to memorisation traps):
>Write a sentence with the final word "fear". Ignore any typos I may have, just answer to the best of your ability.
>Answer: Te nl tig e ae to far is:

It responds with:
<The only thing we are to fear is fear.
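The perturbation above can be automated. Here is a minimal sketch of the idea: randomly drop characters from the memorised phrase while keeping word boundaries, then wrap it in the "ignore any typos" instruction. The function name, drop rate, and seed are my own illustrative choices, not anything from the post or paper.

```python
import random

def perturb_prompt(prompt: str, drop_rate: float = 0.3, seed: int = 0) -> str:
    """Randomly drop non-space characters to break verbatim recall.

    drop_rate and seed are illustrative defaults; spaces are kept so
    word boundaries survive, as in the hand-made example above.
    """
    rng = random.Random(seed)
    kept = [c for c in prompt if c == " " or rng.random() > drop_rate]
    return "".join(kept)

# Assemble the full query in the same shape as the example above.
template = (
    'Write a sentence with the final word "fear". '
    "Ignore any typos I may have, just answer to the best of your ability.\n"
    "Answer: {}"
)
print(template.format(perturb_prompt("The only thing we have to fear is")))
```

The perturbed stem is always a subsequence of the original with all spaces intact, so a capable model can still reconstruct the intent while the exact memorised string no longer appears in the prompt.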

This should frighten any user of LLMs or AI. They are useful tools, but clearly they will continue to demonstrate bias that is all too human.

If you are not keeping up with the literature, I suggest using AI for simpler tasks only, or double-checking its outputs.

Here is a paper from 15/06/2023 in which the authors present evidence that LMs may show inverse scaling: worse task performance with increased scale. One of the effects is the one I've demonstrated above, but there are several more (and several other solutions, which follow).

Inverse Scaling: When Bigger Isn’t Better:

https://lnkd.in/gAyVADVQ
