I personally don't think it'd be bad, though AI is a poor tool for it. English grammar rules are finite and have been handled well in software for over a decade (see Grammarly etc.). Using AI is more akin to hiring an editor, one you're likely to trust completely because you don't understand the language deeply enough to notice when it makes subtle changes. Tools that highlight errors for you to fix yourself are going to be significantly more accurate and useful (in Google Docs -> Extensions -> Add-ons, you can search "grammar" and find a bunch of them).
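To make the "finite rules, highlight don't rewrite" point concrete, here's a toy sketch of how rule-based checkers work (two made-up rules, not taken from any real product): it flags spans for the writer to fix instead of rewriting the text for them.

```python
import re

def find_issues(text):
    """Flag simple grammar issues without rewriting the text.

    Returns (offset, length, message) tuples so a UI can highlight
    the spans and leave the fix to the writer. Toy rules only.
    """
    issues = []
    # Rule 1: doubled words, e.g. "the the"
    for m in re.finditer(r"\b(\w+)\s+\1\b", text, re.IGNORECASE):
        issues.append((m.start(), m.end() - m.start(),
                       f"repeated word: {m.group(1)!r}"))
    # Rule 2: sentence starting with a lowercase letter
    for m in re.finditer(r"(?:^|[.!?]\s+)([a-z])", text):
        issues.append((m.start(1), 1, "sentence should start uppercase"))
    return issues

for off, length, msg in find_issues("the the cat sat. it purred."):
    print(off, length, msg)
```

Real checkers (LanguageTool, Grammarly) are this idea scaled up to thousands of curated rules, which is why they're deterministic in a way a predictive model isn't.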
Another tidbit on this: LLM-based AI (Grok/GPT/Claude/DeepSeek/local models) will actually vastly underperform when you use it for grammar and spelling. These models are predictive, and a large portion of the tokens you've just fed in are grammatically incorrect, which is the equivalent of sending malformed packets: the more malformed packets, the worse the prediction becomes. It might still work, but I'd expect at least one model generation of quality regression (i.e. GPT-5 producing GPT-4 quality).
As a third, tangential note: a good skill to build is retrospective analysis of your own writing. You opened your question with two qualifying statements before asking a moral question. Even if you're not consciously aware of it, you clearly have some internal moral doubts about using AI for this, and you're outsourcing that judgment to the group.
I personally have no qualms about this use case (though I think it's strictly worse than other tools), but that's my morality, not yours.