Employing Ulysses contracts to discourage AI peacocking
In Greek mythology, Ulysses had his men tie him to the mast so he could hear — yet not fall for — the calls of the sirens. The sirens alluringly promised knowledge, wisdom, and pleasure — if only he would stop and divert his ship. Meanwhile, the rest of the crew put wax in their ears and stayed on course despite Ulysses' pleas. This story gets to the heart of a very real commitment device: binding your future self — making it harder or more costly to engage in undesirable behaviour — so that you can overcome temptation or procrastination.
This strategy has been used in fields such as personal finance and health, yet it does not appear to be used in the rollout of AI. Instead, some people are going full bore into temptation, while others are slapping together an AI implementation strategy. Both of these feed into AI peacocking.
AI peacocking refers to organizations embellishing claims about their AI adoption. Essentially, people are making big public displays of being on the cutting edge, despite the reality not reflecting this. In the language used above, they are either tempted to be perceived as leaders in AI adoption or are procrastinating on the rollout of these tools. It is important to note that I refer to organizations in this paragraph, but organizations are nothing more than the sum of their people.
So, as you — yes, you, reading this, even if you are not part of a typical organization — navigate these choppy waters, I hope to offer you some strategies for creating your very own contract, so that you can tie yourself to the mast (i.e., ground yourself), hear the seductive calls of the sirens, and still stay afloat and on course.
The trap of chasing sexy
On January 20, 2025, DeepSeek R1 — a Chinese open-weight model (it is not open-source, as not all of the training data or processes are fully disclosed) — was released, sparking widespread panic that U.S. dominance in AI was under threat. Then came Gemini 3 Pro on November 18, 2025, which people saw as intensifying the battle for AI supremacy. Not long after that, on November 24, 2025, Claude Opus 4.5 was released and became the new kid on the block, with all the attention directed towards it.
By no means is this an exhaustive list (heck, Opus 4.6 was released earlier this month), but it highlights the revolving door of emerging technologies that promise to improve some part of your life. This is great, as long as there is deliberate thought (hence the need for a Ulysses contract) before you adopt a new tool.
Why bother?
You may be asking yourself, "Why bother?" It is great that all these new tools are being developed; you want to be at the cutting edge; you recognize that you can't possibly predict the future of these systems; and so you believe you can't set out a specific contract to shape how you adopt them. These are all valid points — but they do not diminish the reality that having some sort of plan, before the latest system draws you in, is vital.
It is vital because:
AI needs context. AI models have different strengths and weaknesses. For example, you may prefer the way Claude writes but like ChatGPT's reasoning. As a fictitious workflow, you may decide to have ChatGPT dissect a problem and then copy and paste its response into Claude so it can write a report. By taking only ChatGPT's response and putting it into Claude, you lose your original prompt and potentially all the other details from the rest of the conversation. You can remedy this by copying and pasting the entire chat, including reference material, into the second model — but this may be more of a hassle than it is worth, and you may not always remember to take this step (see the sketch after this list for what carrying the full context over looks like).
The costs are not trivial. Most models charge around $20 a month for their pro tier, but prices can go higher. Now, if you want to capitalize on a few different models, that can quickly add up: three pro subscriptions already run about $60 a month, or roughly $720 a year. Sure, you may be made of money and not mind the cost, but that does not make it the most efficient approach.
Learning curve. If adopting a new model slows your workflow by 10% while you get used to it, will it net out as a positive, or will it just bring you back to baseline? These models are converging towards the mean (e.g., they all have a prompt box and let you pick from a few underlying models), but each has quirks that make it unique. Those quirks take a while to learn and, in the meantime, slow down your processes.
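To make the context point above concrete, here is a minimal sketch of what "carrying the whole conversation over" can look like if you chain two models programmatically instead of copy-pasting between chat windows. Treat it as an illustration only: it assumes the official openai and anthropic Python packages, and the model ids and example problem are placeholders rather than recommendations of any particular pairing.

```python
# Minimal sketch: chaining two models while preserving the full context.
# Assumes the official `openai` and `anthropic` Python packages are installed
# and that OPENAI_API_KEY / ANTHROPIC_API_KEY are set in the environment.
# Model ids and the example problem/material are placeholders.
from openai import OpenAI
import anthropic

openai_client = OpenAI()
anthropic_client = anthropic.Anthropic()

problem = "Summarize the trade-offs of adopting a second AI model in my workflow."
reference_material = "Notes: current tool costs $20/month; team of 5; weekly reports."

# Step 1: have the "reasoning" model dissect the problem.
analysis = openai_client.chat.completions.create(
    model="gpt-4o",  # placeholder model id
    messages=[{
        "role": "user",
        "content": f"{problem}\n\nReference material:\n{reference_material}",
    }],
).choices[0].message.content

# Step 2: pass the "writing" model the whole exchange (original prompt,
# reference material, and the analysis), not just the analysis on its own.
full_context = (
    f"Original problem:\n{problem}\n\n"
    f"Reference material:\n{reference_material}\n\n"
    f"Analysis from another model:\n{analysis}\n\n"
    "Write a short report based on all of the above."
)
report = anthropic_client.messages.create(
    model="claude-opus-4-5",  # placeholder model id
    max_tokens=1000,
    messages=[{"role": "user", "content": full_context}],
).content[0].text

print(report)
```

The point is simply that the second model receives the original prompt and reference material, not just the first model's answer — which is exactly what gets lost when you paste only a single response across.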
Strategies to implement your own contract:
Determine your AI archetype. Do you want or need to be at the cutting edge, or should you be a fast follower? This is an important question to ask yourself, as there are pros and cons to both. Being at the cutting edge means you get access to potentially the latest and greatest — but also more of the duds. You will need to grapple with switching between models, the higher cost, and the learning curves. Alternatively, you could be a fast follower. No one seems to want to openly call themselves a fast follower, yet I believe it is a prudent, and likely the smartest, approach for most people and organizations. You may not have the time and resources to overcome all the hurdles, and the gains from being ahead of everyone may be marginal at best, as long as you are quick to follow the bleeding-edge adopters.
Ground yourself. This may take the form of a written AI strategy for yourself or just a short checklist for deciding whether to adopt a new tool. Just like the sirens, some people suggest that AI is the solution to every problem — even problems that do not exist. They also prey on your fear of falling behind.
A simple rule of thumb I heard from Cal Newport is to think of AI as a smart 21-year-old who can learn if you spend enough time teaching them. Certainly, this is not always the case — some models are better than our fictitious 21-year-old, and some are worse. But it is a good rule of thumb for deciding whether AI can solve a given problem.
Another one is to determine the problem before the solution. Ask yourself whether any of your current problems could be solved by AI. If you do not have any major problems at the moment, great! Wait for the need to arise and only then seek out a potential AI solution. If you skip this step, you will likely end up overloading your workflow with systems that don't help you solve anything. For example, you stumble upon a model that can do something you find extremely cool (e.g., generate a hyperrealistic video). You don't actually need it, yet you try to cram it into your system in the hope of enhancing it, but it doesn't improve anything and instead hampers it.
Choose the vital few over the trivial many. The Pareto principle states that roughly 20% of the inputs cause 80% of the outcomes. The same principle can be applied to AI. Maybe incorporating an AI agent into how you do things will unlock that 80%, but I highly doubt 8 agents from 8 different providers will make your life any easier.
These principles may seem trite and overly simplistic. Yet maybe we need to remind ourselves of the tried-and-true rules that have endured over the generations. I want to explicitly state that these are not definitive rules you must adopt. For example, maybe you love trying out all the models and do not mind the costs. Perfect, keep that up — but be mindful that it is something you are choosing to do. Do not fall into the trap of thinking you have to do it because you will otherwise fall behind, or that you will unlock the most productive system if only you had the right AI model.
I just wanted to offer a few suggestions as you create your own Ulysses contract. I can't stress it enough: trying to navigate AI without any plan (even a really simple one) will result in needless costs, headaches, and, most likely, peacocking. Instead of getting pulled in every direction, set the course and capitalize on what AI actually has to offer.
Send me a message or drop a comment with some of your rules so we can make a repository of principles together.
Take care,
Emanuel