I agree with the majority of your supporting views here, and yet profoundly disagree with the concept of "offsetting", which I believe ignores an ontological foundation of consequentialism.
The world we interact with presents us with tradeoffs. A chess player may "sacrifice a pawn for development", a patient undergoes the suffering, lost happiness and other pluralist harms from chemotherapy in order to avoid greater expected harms from cancer, and so forth. These tradeoffs -- these connections between groupings of consequences -- are features of the choice space presented to the agent. A defining feature of those connections we call "tradeoffs" is that *the fact of the connection is itself a bad thing*. If the chess player were able to achieve equivalent development while also keeping the pawn, this would be better, but the constraints of the game don't present this option. If doctors could cure cancer just as reliably with no chemo (or anything else as harmful), this would be an amazing improvement.
The behaviors that we're discussing as "offsetting" seem to me to be inventing fictive constraints that do not already exist in the decision space, but *would make things worse if they did*. The tradeoff *given* the fictive constraint may very well be net positive, but the invention of the fictive constraint itself is negative. It's true that if a chess player were to purposefully decide to play a terrible, easily losing opening in one game and then play brilliant winning chess in eleven other games, the overall result would be good. But this is not the same phenomenon as a sacrifice of material for position within a game. In the latter case, the pre-existent rules constrain the agent's decision space; in the former, they faced no such constraint until they forged one. Similarly, being cured of a deadly cancer and then later -- while completely healthy -- being given a useless round of chemo would probably still be net positive overall. Nevertheless, if such a thing were done under a fictive tradeoff, then the invention of such a tradeoff would have been extremely bad.
I fully agree, it's quite obviously true that most vegans are capable of donating an amount of money and/or time that's much more valuable to the goal of animal agriculture abolition and broader sentientist consequentialism than the total effects of their personally abstaining from consuming animal products. You and I are in full agreement that we should talk much more about the larger-impact choices relative to the smaller ones. But where there's no tradeoff between the two in the decision space with which the world presents us, inventing a fictive tradeoff is a major meta-badness in itself. Why the fuck would a consequentialist ever want to introduce such a corrupt piece of code into their program?
I don’t really see how there’s any “fictive trade-off” being invented. I agree that the cases you mentioned are unlike the case of offsetting in the sense that the costs and benefits are part of a package deal - you don’t get the benefits if you don’t get the costs. Meanwhile there’s no need to purchase animal products in order to donate to effective animal charities.
The point I’m making is not that there’s a trade-off between these two things - that you must choose between being vegan and donating to animal charities. The point is that, given that none of us act morally optimally - even the most self-sacrificing individuals fall far below that standard - why the special scorn reserved for people who fall short of that standard in this one particular way?
It seems to me that purchasing animal products is met with a level of derision that other decisions - like purchasing luxury goods in lieu of donating - aren’t, even if those other decisions result in just as much or even more harm.
There's also no trade-off involved in someone spending $10 on a smoothie while also donating $20 to an effective animal charity that day. All that money that they spent on the smoothie - which they most certainly did not need - could have been donated to the animal charity instead. I wouldn't see the fact that someone does this as the invention of a "fictive trade-off" on their part - it's just someone attempting to do good, while nevertheless falling short of acting morally optimally. I view the act of offsetting in a similar fashion.
If your main goal is to address the misallocation of disparagement, then I couldn't agree more. Yes, people around us are far more likely to shame someone over being rude toward wait staff than over not even caring to find out where in the world mass famine is happening. And yes, in the vegan culture, people tend to shame others over choices that have small negative impact on nonhumans while casually accepting choices that have vastly larger negative impact. I think we're completely on the same page.
My claim here is not that the offsetter's net deviation from the non-offsetter (non-consumption, non-donation) is bad. It's that the *concept of offsetting* is itself bad. Outside of game-theoretic situations with multiple antagonistic agents, a decision space cannot be improved by linking discrete choices together. Each discrete choice remains as good or bad as it is independently of the agent's performance on the other choices. Granted, it comes naturally to human intuition to conceptually group choices in our evaluation when they share some major semantic domain, such as being about nonhuman animals. But shouldn't we, as consequentialists speaking in an honest intellectual context, be pointing out the fundamental error in such a tendency?
Seems incorrect.
Bro you’re literally known as the disreputable gorilla.
Will you also offset the suffering that your offsetting brings me?