With the rise of generative AI we've all seen a lot of discussion about whether AIs could be conscious, now or in the future. While many of these discussions are productive, I've seen others that look more like the proverbial youths arguing about the question "If a tree falls in the forest and no one is around to hear it, does it make a sound?" That's not a question you can just answer, but by recognizing that "sound" can mean both an auditory experience and a sequence of vibrations in the air, you can dissolve the question, satisfying our curiosity without answering it directly.
I don't see us having any hope at this moment of dissolving the much harder questions around artificial consciousness, but I do hope that recognizing that the term "consciousness" can mean different things in different philosophical and scientific systems can make discussion more productive. I see three major schools of thought here, which I'll call reflective consciousness, qualitative consciousness, and temporal consciousness, though others may slice things differently, and this involves some lumping together of theories that aren't entirely compatible.
Reflective consciousness is perhaps the most famous philosophical position due to Descartes's famous phrase, cogito ergo sum, or "I think, therefore I am." Descartes himself distinguished between conscious awareness in general and the sort of self-consciousness pointed to by his famous phrase, but it's clearly the latter we're worried about when thinking about AIs potentially being conscious and what this means for them being morally significant. In modern science you see this approach in zoologists and child psychologists investigating, at its most basic level, things like the ability to recognize oneself in a mirror. You also have planning involving future mental states, e.g. "I'm not thirsty now but I will be later, so I'll bring some water on this hike." And you have investigations of the ability to have beliefs about other people's beliefs, like the Sally-Anne test.