Uncovering the Mystery of Large Language Models – & Predicting Behavior While We Do It

SpotlessMind - Article 21 - 2024-09-18

Why are humans naturally wired to behave illogically? Will we ever be able to better predict those around us for deeper insights in the workplace and beyond? With the rise of Artificial Intelligence (AI) and Large Language Models (LLMs), we may be closer than we realize.

There exists a plethora of research on the intersection between Behavioral Economics – the field which explores how humans actually behave as opposed to how they should behave under economic theory – and Computer Science, most notably concerned with how we can better predict the behaviors of those around us.

While researchers have developed an array of statistical methods and probabilistic models aimed at this, a vast amount of untapped potential lies in LLMs. LLMs — the most widely known of which is ChatGPT — allow computer programmers (or everyday people like you and me) to provide as much input as we wish to a probabilistic model that is intended to behave like a human. The programmer “trains” the language model on files, articles, and documents written by humans, feeding it as much material as their storage budget allows. Using immense computing power and storage space, the model then “learns” how to respond to human inputs (you, when you type into ChatGPT) based on probabilistic modeling.
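To make that “learning from probabilities” idea concrete, here is a deliberately tiny sketch — not a real LLM, just an illustration of the underlying intuition. It “trains” on a made-up nine-word corpus by counting which word tends to follow which, then turns those counts into next-word probabilities. The corpus and the function names here are my own invention for demonstration purposes:

```python
from collections import Counter, defaultdict

# Toy "training data" -- a real model would ingest billions of words.
corpus = "computer science is fun and computer science is useful".split()

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def next_word_probability(word, candidate):
    """Fraction of the time `candidate` followed `word` in the corpus."""
    counts = following[word]
    total = sum(counts.values())
    return counts[candidate] / total if total else 0.0

print(next_word_probability("computer", "science"))  # 1.0 -- "science" always follows
print(next_word_probability("is", "fun"))            # 0.5 -- "fun" follows half the time
```

Real LLMs condition on far more than the single previous word, but the spirit is the same: frequencies in the training material become probabilities that guide what comes next.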

In layman’s terms, programmers provide the LLM with a lexicon full of words and phrases that can be modeled as word embeddings. These word embeddings can be derived from a word-word co-occurrence matrix, which assigns each word values along certain dimensions and thereby turns each word into a vector. A vector can be thought of as a line in multidimensional space that has a direction. What is special about this is that it allows us to compare words to one another geometrically. Recall the elementary formula A squared + B squared = C squared? Believe it or not, that is actually quite useful, especially for our purposes: the same idea generalizes beyond a simple Cartesian plane to any number of dimensions, letting us calculate how far away the word “computer” is from the word “science.” In other words, what is the probability that if I say “computer,” the next word I utter will be “science”? (Probably a lot higher than the probability of my uttering “barbell” next, for example.) Equipped with this knowledge, the LLM synthesizes all of the material the programmer gave it initially into a coherent, human-like response by calculating the probability of every candidate next word given the words before it, and repeatedly selecting the one with the highest probability.
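That Pythagorean idea can be shown in a few lines. Below is a minimal sketch with made-up two-dimensional “embeddings” I invented for illustration — real models learn vectors with hundreds or thousands of dimensions — that measures how far apart words sit using exactly the A² + B² = C² formula:

```python
import math

# Hypothetical 2-D embeddings, hand-picked for this example only.
embeddings = {
    "computer": (1.0, 4.0),
    "science":  (1.5, 3.5),
    "barbell":  (6.0, 0.5),
}

def distance(word_a, word_b):
    # The Pythagorean theorem, generalized: square root of the
    # sum of squared differences along each dimension.
    return math.sqrt(sum((x - y) ** 2
                         for x, y in zip(embeddings[word_a], embeddings[word_b])))

print(distance("computer", "science"))  # small: related words sit close together
print(distance("computer", "barbell"))  # large: unrelated words sit far apart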

Given the current power of LLMs, it’s not hard to imagine all of the other possibilities they’re capable of. If LLMs can produce human-like responses with such ease, who is to say they can’t predict human behavior with similar facility? Envision a world in which you can leverage technology to accurately predict how your boss would respond if you sent that one email, how your doctor would react if you described that one issue, or how your professor would answer if you asked that question.

It should be said that technology is not a substitute for human interaction, nor should it ever be. We still – and hopefully always will – need to be able to communicate and empathize with those around us in a variety of scenarios, but this need not be mutually exclusive with our use of the resources becoming increasingly available to us.

As AI-powered tools and LLMs become increasingly prevalent, I hope we can all face them with curiosity and excitement rather than fear and disdain. As mysterious or complicated as they may seem, it does not require a Computer Science degree to understand or use these tools in our daily lives. The sooner we come to understand that AI isn’t here to take over the world, the sooner we can begin to close the information gap between the highly technically educated and ordinary people like you and me.

If you’re interested in getting A Briefing on You: A Roadmap to How You Work Best, or Your Personal User Manual to give to colleagues, you should try SpotlessMind.io.

Emma Shockley

Emma is an undergraduate student at the University of Pennsylvania studying Computation and Cognition alongside Consumer Psychology. She takes a creative approach to leveraging her analytical skill set to create positive social value within the realm of wellness and technology.
