Objection: Utilitarianism has the potential to justify any act. Many terrible things can be done in the name of maximizing general happiness, e.g. deceit, torture, slavery. As long as the benefit to the wider populace outweighs the harm, the utilitarian might be forced to call these actions justified.
Objection: The "no rest" objection. According to utilitarianism, one should always perform the act that promises to produce the most utility. But there is usually a vast range of possible acts to choose from, and even if I can be excused from considering all of them, I can be fairly sure that there is almost always some better act I could be doing instead. E.g. when I am about to go to the cinema with a friend, I should ask myself whether helping the homeless in my community would promote more utility.
Objection: The problem of incommensurability. The formula "the greatest happiness for the greatest number" contains two superlatives; when they conflict, which variable do we rank first, the amount of happiness or the number of people who receive it?
Objection: It is difficult to predict consequences. Utilitarianism seems to require a superhuman ability to look into the future and survey all the possible consequences of an action. We normally cannot know the long-term consequences of an action because life is too complex and the consequences extend into the indefinite future. E.g. the "Baby Hitler" case: no one in 1889 could have foreseen the consequences of the infant Hitler surviving, so no utilitarian calculation at the time could have condemned caring for him.
Objection against AU (Act Utilitarianism): It is difficult to define, let alone measure, pleasure.
Objection against RU (Rule Utilitarianism): A strong rule utilitarian ends up effectively deontological, which can lead to irrational decisions: obeying rules even when disobeying them would produce more happiness (e.g. refusing to lie even to save someone's life). A weak rule utilitarian, who permits breaking a rule whenever doing so maximizes utility, ends up no different from an act utilitarian.
Objection against PU (Preference Utilitarianism): Some people cannot make their preferences known (e.g. those in a permanent vegetative state, or a foetus), so their interests cannot enter the utilitarian calculation.