I try to discover and distill the most impactful knowledge over the long term.
I am interested in almost anything but organically gravitate towards topics like cognitive biases, epistemology, mental models, collective coordination and decision-making, view quakes, well-being, rationality, and effective altruism. I don't normally write, but I care about being simple and precise. Read more in About.
<aside> 💡 Welcome to my wiki! Custom icons mark more extended notes
</aside>
Emotions through evolutionary psychology
Timeline of historic inventions
Value of life years – QALY and DALY
The brain has a window for language learning, roughly from 7 to 13 years old
Enlightening perspectives from other cultures
Coordination and epistemic tools
See my blog on Substack where I post essays and updates. A couple of essays to start with:
https://sysiak.substack.com
<aside> 💡 Notes are short-form writings, “processings”, and idea drafts that are too short for Substack.
</aside>
Trigger-Action Planning — LessWrong
Summary
TAP is a subtle technique for improving your autopilot: “The TAP is a sort of pop-up dialog box that says "Hi there! This is a chance to remember that you had a goal to take the stairs more often. Would you like to do anything about that?"”
Why is the TAP framing useful? We are TAP machines, similar to the wasp example from the text. The sphex wasp mindlessly repeats its egg-laying sequence up to 40 times or more if its paralyzed prey is moved a few inches from the burrow entrance: each time, the wasp brings the prey to the burrow, goes inside to check it, comes out, finds the moved prey, and starts over. This demonstrates how complex-looking behaviors can actually be simple trigger-action patterns without real cognitive flexibility.
A TAP is not going to fix a deeper internal conflict or conflicted motivation dynamics.
My experience
I loved rereading the idea of TAPs in the CFAR handbook. I initially encountered it via The Power of Habit and Atomic Habits, but the framing faded away and I stopped using it actively. I think my definition of a TAP was blurry: I didn't really understand the usefulness of distinguishing T (triggers) from A (actions), and I stopped being proactive about setting Ts in my life. One action can be the next action's trigger, so one can set up super easy actions as triggers, missing links for the next actions.
October 28, 2024
The key to learning might not be about stacking as much new information as possible, but about arriving at the right state of mind, the right “wavelength”. That is, following the flow and feeling the constraint: learning at a pleasant rate, knowing there will be patches of boredom, not rushing, being calm and composed, feeling humble about the huge scope of knowledge I will never have.
See more context in [Draft] Spaced repetition – first disappointment, then calibration of my progress agents
August 7, 2024
Richard Ngo said something like this:
I've also updated over the last few years that having a truth-seeking community is more important than I previously thought - basically because the power dynamics around AI will become very complicated and messy, in a way that requires more skill to navigate successfully than the EA community has. Link
This resonates with me, and I would slightly transform it in this direction:
Developing [epistemic, collective intelligence, and coordination tools] is more important than I previously thought - basically because the power dynamics around AI [and other new transformative technologies] will become very complicated and messy, in a way that requires [way] more skill to navigate successfully.
I talk about AI safety with a lot of people, ranging from a) experts, the Berkeley crowd working directly on the AI safety problem, b) to informed “New York Times” readers, c) to people completely new to the subject. When thinking about the audience somewhere between and including b) and c) (referred to below as the BC audience), I think the three biggest cruxes are:
a) AI will not necessarily become agentic
b) there is no alignment problem, that is, AI will do what we ask of it
c) if something goes wrong, we will always be able to turn the AI off
July 3, 2024