I try to discover and distill the most impactful knowledge over the long run.

I am interested in almost anything but organically gravitate towards topics like cognitive biases, epistemology, mental models, collective coordination and decision-making, view quakes, well-being, rationality, and effective altruism. I don’t normally write, but when I do I care about being simple and precise. Read more in About me.


Pages

Wow ideas

Change log

Notes

Projects

About me

Anonymous feedback

Contact

Wiki

<aside> 💡 Welcome to my wiki! Custom icons mark more extended notes.

</aside>

Thinking

Cognitive Biases

Mental Models

Mental models create

Redefinitions

Death

Time

Peace

Epistemology

Epistemic status

Epistemology

Ethics

Animals

Population ethics

Psychology

Emotions through evolutionary psychology

Habits

Yips

Science

Replication Crisis

Double Slit Experiment

Creating

Writing

100 Day project

Counting

World Population Peak

Deep Time

Timeline of historic inventions

Estimating

Value of life years – QALY and DALY

Compounding interest

Concepts

Increasing commitment tactic

Fine-tuning paradox

Overton window

The brain has a window for language learning, roughly from ages 7 to 13

Enlightening perspectives from other cultures

Mix

Meditation

Questions

Coordination and epistemic tools

Reflected Best Self Exercise

Simple shifts & view quakes

Effective Altruism

WIP Wiki

Writing

See my blog on Substack, where I post essays and updates. A couple of essays to start with:

To-do waves

Expert trap (Part 1 of 3)

AI Revolution 101

https://sysiak.substack.com/embed

Notes

<aside> 💡 Notes are short-form writings, “processings”, and idea drafts that are too short for Substack.

</aside>

https://www.lesswrong.com/s/KAv8z6oJCTxjR8vdR/p/W5HcGywyPoDDdJtbz

Trigger-Action Planning — LessWrong

Summary

Why is the TAP framing useful? We are TAP machines, somewhat like the wasp example.

TAP is a subtle technique for improving your autopilot: “The TAP is a sort of pop-up dialog box that says ‘Hi there! This is a chance to remember that you had a goal to take the stairs more often. Would you like to do anything about that?’”

TAPs are not going to fix a deeper internal conflict or conflicted motivation dynamics.

My experience

I loved rereading the idea of TAPs in the CFAR handbook. I first read about it in The Power of Habit and Atomic Habits, but the framing faded away and I stopped using it actively. I think my definition of TAP was blurry: I didn’t really understand the usefulness of distinguishing T (triggers) from A (actions), and I stopped being proactive about setting up Ts in my life. One action can be the next action’s trigger, so one can set up super easy actions as triggers, missing links for the next actions.

October 28, 2024

[Draft] Spaced repetition – first disappointment, then calibration of my progress agents

August 7, 2024

[Draft] Put your mind on the right wavelength as the key to good learning

The key to learning might not be stacking as much new information as possible, but arriving at the right state of mind, the right “wavelength”. That is, following the flow and feeling the constraints: learning at a pleasant rate, knowing there will be patches of boredom, not rushing, staying calm and composed, and feeling humble about the huge scope of knowledge I will never have.

See more context in [Draft] Spaced repetition – first disappointment, then calibration of my progress agents

August 7, 2024

Truth-seeking software

Richard Ngo said something like this:

I've also updated over the last few years that having a truth-seeking community is more important than I previously thought - basically because the power dynamics around AI will become very complicated and messy, in a way that requires more skill to navigate successfully than the EA community has. Link

This resonates with me, and I would slightly transform it in this direction:

Developing [epistemic, collective intelligence, and coordination tools] is more important than I previously thought - basically because the power dynamics around AI [and other new transformative technologies] will become very complicated and messy, in a way that requires [way] more skill to navigate successfully.

Untangling the three biggest cruxes for AI safety

I talk about AI safety with a lot of people, ranging from a) experts, the Berkeley crowd working directly on the AI safety problem, to b) informed “New York Times” readers, to c) people completely new to the subject. When thinking about the audience between and including b) and c) (referred to as the BC audience below), I think the three biggest cruxes are:

a) AI will not necessarily become agentic

b) there is no alignment problem, that is, AI will do what we ask

c) if something goes wrong, we will always be able to turn AI off

July 3, 2024