This depends on your definition of intelligence.
There are two competing (categories of) ways to define intelligence: Internalism and Externalism. Neither is "more correct" than the other - the difference is semantic.
Internalism
Under Internalism, intelligence is defined as occurring entirely within the brain. Someone who has learned to take notes and refer back to them is not better (and may actually be worse) than someone who takes bad notes or no notes at all, because the content of those notes is considered input from the senses. They aren't remembering anything - they're rediscovering it from physical cues.
I am not well versed in internalist theory, so I can't provide any references for it. I find it to be a very intuitive definition of intelligence, though.
Externalism
Under Externalism (and especially under the Extended Mind Thesis), intelligence is a phenomenon that arises from a system containing more than just a brain. A system containing a person, a pen, and a piece of paper is better at remembering information than a system containing just the person, so the former system can be said to have better memory.
While this definition of intelligence seems tortured and unnatural, I think it is extremely pragmatic. If I want to get better at remembering things, the best way to do that is to carry a notebook - a solution that an Internalist worldview rules out as a genuine improvement to my memory.
Applications to Artificial Intelligence
Under Internalism, the only effect AI has on intelligence is that many skills are learned incompletely, because the brain can't perform them without also receiving input from the AI (for example, asking it for reminders about how to do things). Under this definition, AI almost certainly makes its users less proficient at many tasks.
Under Externalism, the system of a person, a computer, and an AI program might be more intelligent than the system containing only the person and the computer. When faced with a task, the person could ask the AI for suggestions. The larger system (including the AI) will outperform the smaller system (without the AI) on many tasks.
Might AI still make people stupider, even under Externalism?
Yes, but not for all tasks. AI is much better at seeming competent than at being competent, and in nearly every field neither the human nor the AI can accurately gauge the AI's competence. Thus, the human+AI system will often let the AI attempt every hard problem, with the human performing cursory checks at most. For problems where the human's competency exceeds the AI's, this results in the human+AI system performing worse than the human alone.
This phenomenon is sometimes referred to as The Jagged Frontier.
Might AI still make people smarter, even under Internalism?
(Based on a discussion in the comments; credit to Dubu for this argument).
Cognitive tools can also be used in an educational setting - for example, students work out math problems on paper, or write essays for their instructors to evaluate. Even under an Internalist definition, using cognitive tools in this way often makes students more capable - a student who writes an essay on a topic (on paper or with a computer) will develop knowledge and the ability to reason about that topic (even without the paper or the computer). Similarly, a student who uses the internet to research a topic will learn something about that topic that they can take with them even when they are not seated at an internet-connected computer.
It is likely that some AI technology (either existing LLMs or a future cognitive tool) will end up being integrated into education in some way - either used by teachers to teach students more effectively, or used by students to study more effectively - resulting in greater learning, even in ways that do not make the students dependent on the tool. In this way, using AI will likely make those students more intelligent.