Trigger-Action Planning — LessWrong
Summary
TAP is a subtle technique for improving your autopilot: "The TAP is a sort of pop-up dialog box that says 'Hi there! This is a chance to remember that you had a goal to take the stairs more often. Would you like to do anything about that?'"
Why is the TAP framing useful? We are TAP machines, much like the wasp example from the text. The Sphex wasp mindlessly repeats its egg-laying sequence up to 40 times or more if its paralyzed prey is moved a few inches from the burrow entrance: each time, the wasp brings the prey to the burrow, goes inside to inspect it, comes out, finds the prey moved, and starts over. This demonstrates how complex-looking behaviors can actually be simple trigger-action patterns without real cognitive flexibility.
TAPs won't fix a deeper internal conflict or conflicted motivation dynamics.
My experience
I loved rereading the idea of TAPs in the CFAR handbook. I first encountered it via The Power of Habit and Atomic Habits, but the framing faded away and I stopped using it actively. I think my definition of a TAP was blurry: I didn't really understand the usefulness of distinguishing T (triggers) from A (actions), and I stopped being proactive about setting up Ts in my life. One action can be the next action's trigger, so one can set up super easy actions as triggers, missing links that cue the next actions. A minimal sketch of this chaining idea follows below.
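To make the chaining point concrete, here is a minimal sketch of TAPs as trigger-to-action rules where completing each action deliberately emits the trigger for the next. This is purely my illustrative analogy; the rules and names here are made up, not from the CFAR handbook.

```python
# Toy model of TAP chaining: completing one easy action emits the
# trigger for the next, so the chain doesn't rely on remembering
# the big goal mid-stream. All names are illustrative, not CFAR's.

# Each rule: trigger -> (action, trigger emitted once the action is done)
taps = {
    "see the stairwell door": ("open the door", "door is open"),
    "door is open": ("take one step onto the stairs", "on the stairs"),
    "on the stairs": ("walk up one flight", None),  # chain ends here
}

def run_chain(trigger):
    """Fire matching TAPs until no rule matches the current trigger."""
    while trigger in taps:
        action, next_trigger = taps[trigger]
        print(f"trigger: {trigger!r} -> action: {action!r}")
        trigger = next_trigger

run_chain("see the stairwell door")
```

The point of the model: the "super easy" first action exists mainly to produce the next trigger, which is what I was missing when I stopped setting up Ts proactively.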
October 28, 2024
The key to learning might not be stacking as much new information as possible but arriving at the right state of mind, the right "wavelength"? That is, following the flow and feeling the constraint: learning at a pleasant rate, knowing there will be patches of boredom, not rushing, being calm and composed, feeling humble about the huge scope of knowledge I will never have.
See more context in [Draft] Spaced repetition – first disappointment, then calibration of my progress agents
August 7, 2024
Richard Ngo said something like this:
I've also updated over the last few years that having a truth-seeking community is more important than I previously thought - basically because the power dynamics around AI will become very complicated and messy, in a way that requires more skill to navigate successfully than the EA community has. Link
This resonates with me, and I slightly transform it in this direction:
Developing [epistemic, collective intelligence, and coordination tools] is more important than I previously thought - basically because the power dynamics around AI [and other new transformative technologies] will become very complicated and messy, in a way that requires [way] more skill to navigate successfully.
I talk about AI safety with a lot of people, ranging from a) experts, the Berkeley crowd working directly on the AI safety problem, b) to the informed "New York Times" reader, c) to people completely new to the subject. When thinking about the audience somewhere between and including b) and c) (referred to as the BC audience from here on), I think the three biggest cruxes are:
a) AI will not necessarily become agentic
b) there is no alignment problem, that is, AI will do what we ask of it
c) if something goes wrong, we will always be able to turn the AI off
July 3, 2024
Honestly, until recently crux a) was also a crux for me. But a recent read of **Ability to solve long-horizon tasks correlates with wanting things in the behaviorist sense** by Nate Soares untangled my crux, at least a little. Here I attempt to phrase the shortest description that would hopefully loosen this crux for the BC audience (see above for the audience definition). The gist of it is:
We naturally want AI to solve increasingly complex tasks. Think: writing a response to a coworker vs. setting up and running a profitable company. 1) As tasks become more complex, they involve more facets of reality that need to be addressed. 2) The more multifaceted the task, the more it involves elements of reality not yet integrated into established digital processes, requiring the AI to connect parts of reality that are currently disconnected, not smooth, or difficult to figure out. Therefore, the AI needs to be more autonomous, self-running, and agentic to overcome these obstacles and constraints.
The article is short (and also includes some reflections on the alignment crux), so I encourage reading it, but if I may surface the single most crux-untangling metaphor, it's the one about the wrench 🔧.
Because the way to achieve long-horizon targets in a large, unobserved, surprising world that keeps throwing wrenches into one's plans, is probably to become a robust generalist wrench-remover that keeps stubbornly reorienting towards some particular target no matter what wrench reality throws into its plans … so you've built a generalized obstacle-surmounting engine. You've built a thing that excels at noticing when a wrench has been thrown in its plans, and at understanding the wrench, and at removing the wrench or finding some other way to proceed with its plans.
https://www.lesswrong.com/posts/AWoZBzxdm4DoGgiSj/ability-to-solve-long-horizon-tasks-correlates-with-wanting
June 28, 2024 – July 3, 2024
How Attunement + Unconditional Love work — A conversation about Attunement Bootcamp
In this conversation, Anita describes an interesting perspective on unconditional love that stayed with me. Listen for yourself via this timestamped link: https://youtu.be/TNDjemBYahc?si=QNmU7K6forYK1UoY&t=2130

Here's a reconstruction that I render through me. Within myself, I feel the potential to be both the repulsive criminal and the most desirable saint. At any given moment, I perceive parts of myself as being somewhere on the spectrum between these extremes. Therefore, I may experience unconditional love for myself by a) questioning why I would love myself less or more based on the uncontrollable circumstances that place me somewhere on this spectrum, and b) feeling into other potential viable versions of me. In the video, Anita describes this thought process and a shift she experienced: feeling strong unconditional love for herself, a love that remains constant regardless of where she is on the spectrum. That is, even if she became a saint, there wouldn't be more of it.
May 25, 2024 – August 1, 2024