
From GPT-3 to Now: The Before and After of ChatGPT's Launch

How a single product launch in November 2022 redefined the relationship between humans and machines — reaching 700 million weekly users and reshaping entire industries.

February 10, 2026 · 14 min read

On November 30, 2022, OpenAI quietly released ChatGPT as a "research preview." Within five days, it had one million users. Within two months, it had 100 million. By July 2025, OpenAI's own economic research documented that ChatGPT was being used weekly by more than 700 million people — roughly 10% of the global population. No technology in history had scaled this fast.

The World Before ChatGPT

To understand the magnitude of the shift, you have to remember what AI meant to most people before November 2022. It meant Siri misunderstanding your requests. It meant chatbots that could only follow rigid decision trees. It meant autocomplete that occasionally guessed the right word. The gap between AI research and AI products was enormous — and most people had no idea what was happening in the labs.

GPT-3 had launched in June 2020 with 175 billion parameters. It was staggeringly capable, but it was an API — a tool for developers, not consumers. You had to write code to use it. The general public had no touchpoint with the technology that was about to reshape their world.

Why ChatGPT Succeeded Where Others Failed

Peer-reviewed research published in 2023, now cited over 500 times, established that user trust plays a critical mediating role in AI adoption. The study found that trust has a significant direct effect on both intention to use and actual use of ChatGPT, with trust's effect on usage partially mediated by intention. This explains something crucial: previous chatbots failed not because the technology was absent, but because they never crossed the trust threshold necessary for mass adoption.

ChatGPT crossed that threshold because it was genuinely useful on the first try. You could ask it to explain quantum physics to a five-year-old, debug your Python code, draft a resignation letter, or write a meal plan — and it would produce something remarkably coherent. The interface was dead simple: a text box. No documentation required.

A longitudinal study of 222 Dutch higher-education students over eight months revealed that trust, emotional response, and perceived behavioral control significantly predicted sustained usage. People didn't just try ChatGPT — they integrated it into their daily routines because they developed genuine trust in its outputs.

The Economic Shockwave

The economic impact was immediate and measurable. Research published through the Institute of Labor Economics (IZA) examined ChatGPT's impact on the labor market and found that 32.8% of occupations could be fully impacted by the technology, while 36.5% might experience partial impact. Only 30.7% of occupations were likely to remain unaffected.

The research identified two divergent scenarios playing out simultaneously. In the first, productivity gains from AI augmentation increase both employment and wages — workers using ChatGPT get more done, become more valuable, and see their roles expand. In the second, automation directly displaces human labor, reducing demand for workers in tasks that AI can perform more cheaply.

Both scenarios are happening at once, in different industries and for different roles. The net effect is still unfolding.

The Scientific Community Responds

The scale of ChatGPT's societal impact demanded a rigorous scientific response. A landmark publication in Nature Human Behaviour gathered insights from 28 scientists across disciplines — from Copenhagen Business School to the Max Planck Institute for Human Development — to assess how large language models affect collective intelligence and societal decision-making.

Their findings were nuanced. The benefits were real: enhanced accessibility to information, improved collaboration across language barriers, accelerated idea generation, and democratized access to capabilities that previously required expensive expertise. But so were the risks: data quality degradation, hallucinated facts presented with confidence, ethical alignment challenges, and the potential for homogenized thinking as millions of people consult the same model.

The National Institute of Standards and Technology (NIST) launched the ARIA program — AI Risk and Impact Assessment — to evaluate large language models at three levels: model testing, red teaming, and field testing. This multi-level framework represented an acknowledgment that understanding AI's impact required moving beyond laboratory benchmarks into real-world deployment studies.

The Cultural Inflection Point

What makes the ChatGPT moment historically unique isn't the technology itself — GPT-3.5 was not a dramatic leap over GPT-3 in raw capability. What was revolutionary was the packaging. By wrapping a large language model in a simple chat interface and making it free, OpenAI turned an API into a cultural phenomenon.

The before-and-after is stark. Before ChatGPT, AI was an industry conversation. After ChatGPT, it became a dinner table conversation. Before, companies explored AI cautiously. After, every boardroom asked "what's our AI strategy?" Before, AI regulation was a niche policy discussion. After, it became a priority for governments worldwide.

What This Means for Builders

For those of us building AI tools, the ChatGPT moment carries a clear lesson: the gap between capability and adoption is a design problem, not a technology problem. The models were already powerful before ChatGPT. What was missing was the interface, the trust, and the accessibility.

At Promethic Labs, this insight shapes everything we build. We're not just focused on making AI more capable — we're focused on making it more usable, more trustworthy, and more accessible to people who aren't AI researchers. Because the next ChatGPT moment won't come from a bigger model. It will come from a better experience.

Tags: chatgpt · adoption · history · large-language-models