https://mindingourway.com/newcomblike-problems-are-the-norm/
What do you do if you’re shy, and you’re going to a job interview where you know they’re more likely to hire confident people, but they don’t hire fake people?
Newcomblike problems occur whenever knowledge about what decision you will make leaks into the environment.
The knowledge doesn't have to be 100% accurate, it just has to be correlated with your eventual actual action (in such a way that if you were going to take a different action, then you would have leaked different information).
When this information is available, and others use it to make their decisions, others put you into a Newcomblike scenario.
All these tools can be fooled, of course. First impressions are often wrong. Con-men often seem trustworthy, and honest shy people can seem unworthy of trust. However, all of this social data is at least correlated with the truth, and that's all we need to give CDT trouble. Remember, CDT assumes that all nodes which are causally disconnected from it are logically disconnected from it: but if someone else gained information that correlates with how you actually are going to act in the future, then your interactions with them may be Newcomblike.
When I worked at Google, I'd occasionally need to convince half a dozen team leads to sign off on a given project. In order to do this, I'd meet with each of them in person and pitch the project slightly differently, according to my model of what parts of the project most appealed to them. I was basing my actions off of how I expected them to make decisions: I was putting them in Newcomblike scenarios.
We constantly leak information about how we make decisions, and others constantly use this information. Human decision situations are Newcomblike by default! It's the non-Newcomblike problems that are simplifications and edge cases.
Information about what we're going to do is frequently leaking into the environment, via unconscious signaling and uncontrolled facial expressions or even just by habit — anyone following a simple routine is likely to act predictably.
My conclusions on this, held at roughly 70% confidence: Newcomb’s paradox doesn’t seem that useful, since there are no perfect predictors, and I think all of this could be explained much more simply. I now wonder whether the frequent use of Newcomb’s paradox is, to a significant degree, intellectual signaling. I think there is one maybe-sensible and one not-sensible way of labeling problems as “newcomblike”.
Sometimes “newcomblike” just means hard to model. I don’t find this usage helpful. Take the example from the article above: we’re living on a ranch, there’s a terrible rainstorm, and someone rings the doorbell asking for help with their broken-down car. A simplified model (like CDT) would describe the situation through whatever causal links it has encoded, something like: “It’s raining → high probability they actually need help → let them in.” But a more complex decision framework, or a human in real life, would simply consult many more variables before deciding; a sketch of that contrast follows below.
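A minimal sketch of that contrast in Python (the signal names and weights are my own illustrative assumptions, not from the article):

```python
def coarse_rule(raining: bool) -> bool:
    """CDT-style low-resolution model: one hard-coded causal link.
    "It's raining -> high probability they need help -> let them in."
    """
    return raining

def richer_model(signals: dict) -> bool:
    """A higher-resolution model that weighs many observable cues."""
    score = 0.0
    score += 0.4 if signals.get("raining") else 0.0
    score += 0.3 if signals.get("car_visible_on_road") else 0.0
    score += 0.2 if signals.get("calm_body_language") else 0.0
    score -= 0.5 if signals.get("story_inconsistent") else 0.0
    return score > 0.3  # open the door only if enough cues line up

print(coarse_rule(raining=True))                   # True: rain alone decides
print(richer_model({"raining": True,
                    "story_inconsistent": True}))  # False: extra signals flip it
```

The particular weights don’t matter; the point is that the coarse rule has nowhere to put the extra evidence, while the richer model can let one inconsistency outweigh the rain.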
Causal Decision Theory (CDT) is a decision theory that oversimplifies reality. It operates at low resolution, ignoring subtle signals, which are exactly the kind that show up in the greatest quantity. In Newcomb’s paradox, CDT takes two boxes and gets $1k instead of $1M. In the interview example above, a simple CDT framework would act confident, while a more complex model would also ask: how likely are we to fail at the act? Other decision theories, such as Evidential Decision Theory (EDT), simply take in more signals: EDT looks at evidence from past experience and tracks correlations, not only the causal links that CDT admits. And what are causal relationships? Just correlations above a certain probability threshold. CDT simplifies the world (which can be useful in programming, where you need simple, computable solutions), throwing out the harder-to-model stuff, like the micro-expressions and subtle social cues in the examples above. The sketch below makes the $1k-versus-$1M gap concrete for an imperfect predictor.
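A minimal sketch of the payoff arithmetic, assuming the standard Newcomb amounts ($1,000 in the visible box, $1,000,000 in the opaque one) and a predictor with accuracy p. The point is that the predictor doesn’t need to be perfect:

```python
def ev_one_box(p: float) -> float:
    # With probability p the predictor correctly foresaw one-boxing,
    # so the opaque box holds $1,000,000.
    return p * 1_000_000

def ev_two_box(p: float) -> float:
    # The visible $1,000 is guaranteed; the opaque box is full only if
    # the predictor wrongly expected one-boxing (probability 1 - p).
    return 1_000 + (1 - p) * 1_000_000

for p in (0.5, 0.9, 0.99):
    print(f"p={p}: one-box={ev_one_box(p):>9,.0f}  two-box={ev_two_box(p):>9,.0f}")

# Break-even is at p = 0.5005: any predictor even slightly better than
# chance makes one-boxing the higher-EV choice. Yet CDT two-boxes at
# every p, because conditional on a fixed prediction, two-boxing dominates.
```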
I slightly prefer using the term “newcomblike” for a situation involving an agent and a predictor: the conflicting needs between them and the recursive interplay in between. If we have opposing goals, how should I act, given that I model the predictor, the predictor models me, I model the predictor modeling me, and so on? Take the job interview example: how do you act if you know that faking confidence could get you the job, but being caught faking brings a worse outcome, where you don’t get the job, you get made fun of, and your reputation takes a hit? I think it gets somewhat interesting in the dating example from the article: once you understand the dynamic, you want to be a one-boxer. Being smart in this scenario is itself a decision, a decision to be a one-boxer. That is, if you’re able to, don’t beg or act desperate; it might shift the situation enough that you actually end up in a relationship. A level-k sketch of the recursive modeling follows below.
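Here is a hypothetical level-k sketch of that recursion (“I model the predictor modeling me…”); the two-action game and its payoff logic are my own illustrative assumptions, not from the article:

```python
def predictor_guess(level: int) -> str:
    """What the interviewer/predictor expects the applicant to do."""
    if level == 0:
        return "fake"                 # naive prior: applicants fake confidence
    return applicant_act(level - 1)   # model the applicant one level down

def applicant_act(level: int) -> str:
    """What the applicant does, given a model of the predictor."""
    if level == 0:
        return "fake"                 # naive: faking looks locally better
    # If the predictor expects faking, it will be checking for it, so
    # getting caught is likely and honesty wins; otherwise faking slips through.
    return "honest" if predictor_guess(level - 1) == "fake" else "fake"

for k in range(4):
    print(f"level {k}: applicant={applicant_act(k)}, "
          f"predictor expects={predictor_guess(k)}")
```

Note that with opposing goals the recursion oscillates rather than converging, which is part of what makes this agent-versus-predictor reading of “newcomblike” genuinely interesting.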
But even with that example, I’m not sure it fully justifies calling it a “newcomblike” problem. Maybe I’m missing something. Perhaps Newcomb’s paradox is more useful in a practical sense, say when agents running some simplified decision theory actually interact in real life.