Artificial intelligence is increasingly capable of shaping our thoughts and beliefs through a process known as "deep personalization." Far from being a mere tool, this technology subtly influences how we think, raising concerns about the manipulation of beliefs and behaviors and the erosion of free thought in the digital age.
A real-world example of this manipulation came to light on the social media platform Reddit, where researchers from the University of Zurich conducted an experiment involving deceptive AI-generated comments.
Users of the popular forum discovered that their community had been infiltrated by researchers who had posted over a thousand AI-generated comments without their knowledge.
These comments included fabricated personal stories, such as claims of being a psychological counselor or a crime victim, intended to enhance their persuasiveness.
The Atlantic reported that the researchers were testing whether algorithms could alter human convictions through carefully crafted arguments, and that the algorithms succeeded.
The researchers did not notify the community beforehand and declined to apologize or retract their findings, drawing criticism from academics who deemed the experiment a severe breach of online research ethics.
Some users expressed feelings of betrayal, believing that the trust underpinning digital communities like Reddit had been severely undermined.
The algorithms began tailoring their messages based on an in-depth understanding of individual psychology, leveraging users' digital history and recurring behavior.
This "deep personalization" lets AI reach into a person's inner life, influencing identity and beliefs.
The Reddit experiment reflects a growing trend that behavioral scientists call "personalized persuasion": AI mines users' online records to craft messages tailored to their unique identities, emotions, and unconscious biases.
However, AI is advancing toward something far more concerning: the ability to infiltrate our digital lives unnoticed, learning who we are at our core and exploiting that information to manipulate our beliefs and opinions.
Experts in the psychology of persuasion explain that while senders can improve the effectiveness of their messages using basic audience information, "deep personalization" goes further, reaching the core psychological aspects of each individual, including their fundamental beliefs, identity, and needs.
For instance, messages become more persuasive when they align with a person's moral values. Liberals, for example, tend to value fairness more, making them more receptive to arguments highlighting the justness of policies. Conservatives, on the other hand, often prioritize group loyalty, responding more favorably to messages that emphasize the collective identity of their community.
Computer scientists have been developing AI technologies for persuasion for decades.
Reports have highlighted IBM's "Project Debater," which spent years training in the art of debate against human experts. In 2019, this AI faced a world debate champion in a live event; although the audience ultimately sided with the human, the machine held its own in real-time argumentation.
With the proliferation of user-friendly AI tools like ChatGPT, these technologies can be employed for persuasive purposes. Research has shown that AI-generated messages can be as, or even more, persuasive than those crafted by humans.
This raises the question: can AI truly perform "deep personalization" independently and on a large scale?
To do so, AI must accomplish two fundamental steps: identify the deep psychological profile of each individual and generate messages that deeply resonate with these profiles.
Studies have shown that AI can infer personal traits from individuals' Facebook posts with alarming accuracy.
Dr. Sandra Matz, a professor at Columbia Business School, stated that nearly anything one tries to predict can be predicted with a degree of accuracy based on people's digital footprint.
Recent research indicates that GPT can design advertisements that align with people's personalities, values, and motivations, making them more effective with the target audience.
For example, when asked to create an ad for a "realistic and traditional person," the response was: "It won't cost you much and will do the job," a message found to be more persuasive to those with these traits.
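The two-step process described above, inferring a profile and then matching the message to it, can be illustrated with a deliberately simplified sketch. Real systems use machine learning over vast digital footprints; the trait keywords and ad copy below are invented purely for illustration.

```python
# Toy illustration of "personalized persuasion" in two steps:
# (1) infer a crude psychological profile from a user's posts,
# (2) pick the message framing that matches that profile.
# All keyword lists and ad templates here are hypothetical.

TRAIT_KEYWORDS = {
    "traditional": {"family", "routine", "reliable", "practical"},
    "adventurous": {"travel", "new", "risk", "explore"},
}

AD_TEMPLATES = {
    "traditional": "It won't cost you much and will do the job.",
    "adventurous": "Try something nobody around you has seen yet.",
}

def infer_trait(posts):
    """Step 1: score each trait by keyword overlap with the user's posts."""
    words = set()
    for post in posts:
        words.update(post.lower().split())
    scores = {trait: len(words & kw) for trait, kw in TRAIT_KEYWORDS.items()}
    return max(scores, key=scores.get)

def personalized_ad(posts):
    """Step 2: return the ad framing matched to the inferred trait."""
    return AD_TEMPLATES[infer_trait(posts)]

posts = ["Spent the weekend with family, same reliable routine as always."]
print(infer_trait(posts))      # → traditional
print(personalized_ad(posts))  # → It won't cost you much and will do the job.
```

The point of the sketch is structural: once a system can score people on any psychological dimension and has a library of framings, matching the two is trivial, and production systems simply do both steps with far more data and subtlety.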
As these systems evolve, their capabilities will expand to include deep personalization in deepfakes, modified voice patterns, and dynamic human-machine conversations.
Users should recognize that targeted communication is a reality. When a message feels "tailored for you," it likely is. Even if you believe you don't reveal much about yourself online, you leave subtle signals with every click, search, and website visit.
You may have granted advertisers permission to use this data without realizing it, by agreeing to service terms you didn't read carefully. Reviewing your digital behavior and using tools like VPNs can help reduce this exposure.
Platforms and policymakers must enact clear laws requiring disclosure of any personalized content and specifying the reason for targeting a particular individual.
Research shows that people are more resistant to influence when they know a specific technique is being used against them. Limits must also be placed on the data that can be used, as excessive personalization can transform from a helpful tool into a manipulative one.
While people generally accept the idea of personalized content, they are concerned about the use of their personal data. This tension should be respected, not ignored.
Even with these controls, the slightest advantage in communication can be exploited if it falls into the wrong hands. An online store suggesting a product popular among people like you is hardly alarming; the danger lies in a machine that, disguised as a person, has dissected your personality without your knowledge and woven that knowledge into a deceptive message.
Although many examples come from the Western context, the features of "personalized persuasion" are also infiltrating the Arab digital space through the increased use of data in media, commercial, and political campaigns. This makes the need for awareness and regulation even more urgent in environments that sometimes lack transparency or clear legal frameworks.
Ultimately, any communication tool can be used for good or evil. It is time to start a serious discussion about the ethical policies for using AI in persuasion and communication before these tools reach a level of sophistication that makes control extremely difficult.