This is a question I've been wrestling with for a while. My first experience with a type system was with Java, and I didn't like it: it felt like an annoying constraint on the kinds of programs I could write. I was coming from Perl, which sports weak dynamic typing, so Java's rigidity came as a bit of a shock.
After Java I learned some C, which also has types. C's types differ from Java's in a big way: in C, they're really just directives telling the compiler how to interpret some bytes. "Everything is just void *" is kind of true: in C, bytes can be interpreted however you wish.
As I matured as a developer, I realized that sometimes I wanted constraints on what I could program. I wanted some way to narrow the range of things my program could do. That may sound bad at first glance, but consider: what if you could narrow the range of ways your program could go wrong? That's what types are designed to do.
Not all type systems are equally powerful: Java's type system prevents certain classes of errors, but a NullPointerException still crops up here and there to blow your (well-typed!) program out of the water. Languages like Rust and Kotlin sport type systems that rule out null dereferences at compile time, so that whole class of crash never makes it to runtime. The trade-off is that these type systems take some getting used to, and can make certain kinds of programs harder to write.