Ultimately, abuse issues have to be handled by humans
Time and time again, people have tried to create entirely automated systems for detecting, identifying, and dealing with spam on their services. Time and time again, they've ultimately failed; their systems may stop a great deal of spam, but enough gets through anyway.
(Not infrequently the spam that gets through looks, from the outside, as if it should be trivial to recognize. I think there is a deep reason for this, which we'll get to.)
There is a shallow reason and a deep reason for this failure. The shallow reason is that humans (and spammers are humans) will relentlessly game any set of automated rules until they find weaknesses, and then drive as many trucks as possible through whatever weaknesses they've found. If your service is at all popular, there will be far more smart spammers trying to game the automation than there are smart people writing it, which puts your automation writers in an arms race they almost certainly cannot win. The deep reason is that you are guaranteed to have weaknesses, because it's essentially impossible to make automated rules as smart as they need to be; the fundamental problem of spam is stopping bad content while letting good content through, whatever 'bad' and 'good' turn out to be. Deciding that is one reason you need people.
(As for why spam that gets through automated systems often looks obvious to people, it's because there's no reason for spammers to add variety once they've gotten past the automated systems. In fact they can be blindingly obvious so long as they evade the automation.)
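To make the arms race concrete, here is a deliberately naive sketch (a hypothetical rule set, not anyone's real filter) of the kind of automated rule spammers game: a keyword blocklist, and the trivial obfuscation that defeats it while staying blindingly obvious to a human reader.

```python
import re

# Hypothetical, deliberately naive rule set: block any message
# containing one of these words.
BLOCKED_WORDS = {"viagra", "lottery", "winner"}

def is_spam(message: str) -> bool:
    """Flag a message if any blocked word appears in it."""
    words = set(re.findall(r"[a-z]+", message.lower()))
    return bool(words & BLOCKED_WORDS)

# The rule catches the straightforward version...
print(is_spam("You are a lottery winner!"))   # True
# ...but a trivial character swap drives a truck through it,
# even though any human instantly reads it as the same spam.
print(is_spam("You are a l0ttery w1nner!"))   # False
```

Every patch to the rules (catch '0' for 'o', say) just moves the game to the next obfuscation, which is the arms race described above in miniature.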
All of this means that places really do need humans to handle their abuse issues; automation can help by catching the obvious things, but it will never entirely replace humans paying attention. The corollary is that places need not just some people but enough people for the volume of abuse they get. This is an extremely unpopular view, since abuse is a cost center and everyone loves the idea of automating your cost centers to make them go away, but by this point we have plenty of experience that this just doesn't work for abuse.
(The further corollary is that anyone who relies on automation instead of staffing up their abuse department to adequate levels is not actually serious about spam, regardless of what they say. They may not be actively for spam and spammers on their service, but to use the fine George Orwell phrase, they are objectively pro-spam. Application to various Silicon Valley firms is left as an exercise for the reader.)