Discussion about this post

J. Goard:

I agree with the majority of your supporting views here, and yet profoundly disagree with the concept of "offsetting", which I believe ignores an ontological foundation of consequentialism.

The world we interact with presents us with tradeoffs. A chess player may "sacrifice a pawn for development", a patient undergoes the suffering, lost happiness and other pluralist harms from chemotherapy in order to avoid greater expected harms from cancer, and so forth. These tradeoffs -- these connections between groupings of consequences -- are features of the choice space presented to the agent. A defining feature of those connections we call "tradeoffs" is that *the fact of the connection is itself a bad thing*. If the chess player were able to achieve equivalent development while also keeping the pawn, this would be better, but the constraints of the game don't present this option. If doctors could cure cancer just as reliably with no chemo (or anything else as harmful), this would be an amazing improvement.

The behaviors that we're discussing as "offsetting" seem to me to be inventing fictive constraints that do not already exist in the decision space, but *would make things worse if they did*. The tradeoff *given* the fictive constraint may very well be net positive, but the invention of the fictive constraint itself is negative. It's true that if a chess player were to purposefully decide to play a terrible, easily losing opening in one game and then play brilliant winning chess in eleven other games, the overall result would be good. But this is not the same phenomenon as a sacrifice of material for position within a game. In the latter case, the pre-existent rules constrain the agent's decision space; in the former, the agent faced no such constraint until they forged one. Similarly, being cured of a deadly cancer and then later -- while completely healthy -- being given a useless round of chemo, would probably be net positive. Nevertheless, if such a thing were done under a fictive tradeoff, then the invention of such a tradeoff would have been extremely bad.

I fully agree, it's quite obviously true that most vegans are capable of donating an amount of money and/or time that's much more valuable to the goal of animal agriculture abolition and broader sentientist consequentialism than the total effects of their personally abstaining from consuming animal products. You and I are in full agreement that we should talk much more about the larger-impact choices relative to the smaller ones. But where there's no tradeoff between the two in the decision space with which the world presents us, inventing a fictive tradeoff is a major meta-badness in itself. Why the fuck would a consequentialist ever want to introduce such a corrupt piece of code into their program?

Woarna:

Seems incorrect.
