Practical Moral Framework for Everyday Decisions


November 22, 2025

I’ve been thinking about how to define moral action in a way that is both principled and practical. The framework I’ve settled on uses three criteria, all of which should ideally be satisfied at once. However, they don’t carry equal weight. In most cases, the first criterion matters most, then the second, and finally the third:
  1. Does the action serve the common good of humanity and life?
  2. Does it respect the autonomy of everyone involved, and serve the true interests of anyone directly affected?
  3. Does it truly serve my own long-term interests, rather than just feeding my ego or giving me quick dopamine hits?
Because these criteria have unequal weight, real decisions often require tradeoffs. In some situations, the common good matters most; in others, autonomy takes priority; and in still others, especially intimate or extreme cases, my own interests may be the only relevant factor.

Now let me apply this framework to the dilemma of “watching YouTube.”
My long-term goals involve reading, writing academic papers, working on stories, and generally becoming a better physicist and a better thinker. YouTube, however, is designed to maximize how long users stay on the platform. It offers content that keeps people watching, not necessarily content that helps them grow. The benefit I provide (some revenue to advertisers and creators) is negligible compared to the cost in lost time and scattered attention. From the standpoint of my own values, the platform often gives me little beyond momentary stimulation.
To be fair, using YouTube in a deliberate, controlled way can be perfectly fine. Limiting what I watch and how long I watch preserves my autonomy, and privacy-respecting clients like FreeTube can help with that. The problem is that YouTube’s default design actively undermines this control: autoplay and algorithmic recommendations are built to keep me watching past the point I intended. Saying “I can always choose to stop” feels a bit like saying a mildly intoxicated person can “technically” choose to refuse another drink. The option exists in theory, but the environment makes it harder to exercise in practice.
It doesn’t help that YouTube often blocks alternative clients like FreeTube, forcing users back into the default ecosystem where these manipulative design choices are strongest. The tools YouTube itself provides, like timers or the ability to toggle autoplay, feel deliberately flimsy. Because of this, the platform as a whole feels more than mildly unethical: it not only undermines autonomy but also encourages a feedback loop that makes me want to watch more tomorrow simply because I watched today.
Of course, this moral judgment is not universal, because morality, like everything else, is relative to context. If you’re on a sinking ship in the middle of the ocean with no hope of rescue and five minutes of battery life left, binge all the damn YouTube you want. At that point, the common good is irrelevant, nobody else is affected, and your long-term interests have evaporated into the saltwater breeze. In those final moments, the algorithm can’t hurt you; it can only comfort you on the way down.