I think I've become a rule utilitarian. I don't know if being a rule utilitarian is any different from being an act utilitarian; I'm still somewhat inclined to think not, but these days more in the sense that act utilitarianism may call for us to act as rule utilitarians always said we should, rather than vice versa.
I increasingly feel not only that there is obviously a difference between how we evaluate public policies and how we evaluate individual actions, but that this difference is entirely appropriate. That is the reason for my increasing rule orientation: the rules for individuals seem to me like they should be different from the rules for society, even if all the rules are justified by the underlying principle of utility.
So, for example, I think most public policy issues should be decided by naked appeal to consequentialist considerations. This is actually a view that many people seem to implicitly accept. I think we should be utilitarians about public policy simply because utilitarianism is right, but less controversial motivations often seem to lead others to the same view. Those who are democratic and egalitarian in their political inclinations, as most liberals at least are, will tend to think that public policy should take everyone's interests into account. The idea that there are considerations other than people's interests tends to be ignored in public policy, even by non-utilitarians, perhaps because they expect people to vote their interests, so satisfying interests seems like the democratic way to go.
On the other hand, as far as my own individual morality goes, and what I expect from others, there seem to be problems for ideologically pure utilitarianism. The famine relief arguments of Unger and Singer suggest that if our standard for individual action is that we should always do whatever will maximize utility, then nobody ever lives up to that standard. Nor does living up to it even seem possible. At least, I'm sure I couldn't; perhaps someone could be brainwashed into doing so, but I don't particularly want to so brainwash myself, and as I am now, I'm too selfish to be a saint. Any effort to train myself to follow such heroic standards would likely be killed by the resentment I'd develop at the ingratitude of the people I'd be trying to help, and at how those people would blithely be making the world a worse place as I was trying to make it better for them.
Unattainable standards are problematic. If people can't be good, then they will have considerable inclination to just turn their backs on morality, either actively ignoring it or taking the view that they're evil and there's nothing they can do about it. This is an issue for Christianity, for example: Christianity provides terrible moral guidance, both because what it calls for isn't actually all that valuable (most obviously in the case of the stupid rules on sex and the deference to any political authority whatever) and because, even though they're not good, the Christian standards are impossibly difficult to follow. Utilitarian standards are at least good, of course, but if they're impossibly difficult to follow, that's no help.
So I advocate a much milder standard. A bare minimum, vaguely suggested by some interpretations of the Kantian tradition, would be that I should choose the easiest (for me) set of moral rules satisfying the condition that, if everybody followed those rules, the world would be better off overall. It is truly depressing how low that standard is, of course. Certainly there's no question of its being impossibly difficult. Indeed, if there's any difficulty in following it, it's that the standard is too easy; one is likely to become bored with it, cease to pay attention, and so accidentally fall short even of this.
Thus, there is reason to go higher than the bare minimum. I suggest taking the bare minimum as a starting point and aiming at a target some substantial distance above it, though still not at the absurd Singer/Unger level. We are inclined to judge ourselves too leniently, and others too harshly, so aiming a little higher is probably a good correction for that mistake. Thinking of ourselves as better than others is flattering to our vanity, so raising the aim a little further beyond the correction level is likely to be quite sustainable in practice: if the resentment at ingratitude for heroic effort would be intolerable to me, the resentment at ingratitude for moderate effort is not, since it's easily counterbalanced by the pleasing opportunity to feel superior. Finally, people like challenges, if not impossible ones, so raising the standard to a level where it takes some effort to follow is likely to make it more rather than less motivating.
Thus, I think there's a case for setting one's personal target some large, but not absurd, distance above what I identified as the bare minimum. If one sets such a target and lives up to it, one also sets a realistic good example for others. Further, if people can be encouraged to set and try to reach such standards, then this is a practical step toward the utopian utilitarian ideal: since the goal here is to be well above the bare minimum standard, and the bare minimum standard is to do what would, if everybody did it, make the world better off overall, then if people in general start following standards like this, the standards can't help but rise over time (the world will become better off, so the standard for what would need to be done to improve it further will go up).
So this is probably all just a rationalization for my being lazy and wanting to follow relatively easy moral standards. But I thought I'd put it out there, since writing things down always helps me clarify them in my own mind, and since it's always possible somebody will have interesting comments.