Rozado’s Visual Analytics


Linguist J. R. Firth's famous dictum, "you shall know a word by the company it keeps," is often cited as a way of highlighting the context-dependent nature of meaning, a property widely acknowledged in the field of distributional semantics. Recent advances in machine learning, such as word embeddings, where AI models learn the semantic properties of words from their collocations in large textual corpora, have provided supporting evidence for Firth's hypothesis.

There has been some recent chatter about the definition of woke/wokeness. I decided to look into this question using Firth's framework. To that end, I constructed word-embedding representations from hundreds of thousands of news and opinion articles published by major news outlets over the 2021-2022 period. Without getting into technical details, word embeddings are derived by parsing a large corpus of text and building vector representations of words as a function of the other terms that tend to appear in their vicinity or in similar contexts. This makes them a useful proxy for estimating which terms often co-occur, or appear in similar contexts, in a body of text.
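As a rough illustration of this kind of pipeline (the post does not say which embedding method or library was used; gensim's Word2Vec and the loader below are assumed purely for the sketch):

```python
# Minimal sketch of building word embeddings from a tokenized corpus.
# Assumes gensim's Word2Vec; the actual method used in the analysis is not specified.
from gensim.models import Word2Vec

# Hypothetical corpus: a list of pre-tokenized, lower-cased articles,
# e.g. [["the", "senator", "said", ...], ...]
articles = load_tokenized_articles()  # hypothetical loader, not a real function

model = Word2Vec(
    sentences=articles,
    vector_size=300,  # dimensionality of the word vectors
    window=5,         # how many neighboring words count as a word's "company"
    min_count=10,     # ignore very rare words
    workers=4,
)

# Words used in similar contexts end up with similar vectors, so nearest
# neighbors approximate the company a word keeps in the corpus.
print(model.wv.most_similar("woke", topn=10))
```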

Specifically, I wanted to examine the contexts in which the words woke and wokeness are used in left-leaning and right-leaning news media. To that end, I built two separate models: one from news and opinion articles published by outlets classified as left-leaning by AllSides, and another from articles published by outlets classified as right-leaning by the same resource.
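Continuing the sketch above, the two-model setup amounts to running the same embedding pipeline on each partition of the corpus and comparing the neighborhoods of the same query terms (left_articles and right_articles are hypothetical tokenized corpora split by the AllSides classifications):

```python
# Sketch of the two-model comparison, using the same assumed gensim pipeline.
left_model = Word2Vec(sentences=left_articles, vector_size=300, window=5, min_count=10)
right_model = Word2Vec(sentences=right_articles, vector_size=300, window=5, min_count=10)

for term in ("woke", "wokeness"):
    left_neighbors = [w for w, _ in left_model.wv.most_similar(term, topn=10)]
    right_neighbors = [w for w, _ in right_model.wv.most_similar(term, topn=10)]
    print(f"{term} in left-leaning outlets:  {left_neighbors}")
    print(f"{term} in right-leaning outlets: {right_neighbors}")
```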
