The Download: 84th Edition

Why advanced language models hallucinate confidently, Sam Altman calling out fake-feeling social media, and Anthropic’s historic $1.5B settlement over pirated training data shaping the future of AI and copyright law.

Intro from Aidan 

Career fairs are one of those things that can feel both exciting and intimidating at the same time. You walk into a giant room full of booths, recruiters in branded polos, and students all trying to look professional, make small talk, and walk away with the hope of landing that golden callback.

Here’s the thing most people get wrong: career fairs aren’t actually about walking out with a job offer in hand. That almost never happens. They’re about sparking conversations that you can follow up on later, and showing companies that you’re more than just another name on a resume.

Think of it less like a finish line and more like the opening round of a boxing match.

If you’re a student or a recent grad, your biggest advantage isn’t how much experience you have, it’s how much interest and preparation you bring to the table.

One of the biggest ways to stand out is to be different from everyone else. Try bringing something nobody else will have, like a student business card with your major, LinkedIn, and contact information.

When you actually talk to these companies, instead of leading with “Here’s my resume,” try something like: “I saw your company is expanding into renewable infrastructure. What does that mean for new grads looking to get involved?” That one question shows you’ve done your homework, and it gives them an opening to talk about what actually matters.

And here’s the most underrated move: follow-up. Most students never send an email after the fair. If you do, and you can reference something specific from your conversation, you instantly stand out.

(Another way to be different)

So don’t stress about being perfect. Go in prepared, stay curious, and think long-term.

The goal isn’t to “win” the fair. It’s to start building a network that makes you desirable and relevant when the real hiring decisions happen.

This Week in AI:

No jargon, no filler—just the biggest AI developments worth knowing right now. Perfect for quick industry insights, so you can skip the buzzwords and get straight to the good stuff. Let’s dive into this week’s AI shake-ups, just as promised:

First, OpenAI dived deep into the mechanics of hallucination, revealing why LLMs, even advanced ones, “guess” answers rather than admit uncertainty. Then, Sam Altman called out social media for feeling increasingly artificial, pointing a finger at bots as the culprits in declining authenticity. And finally, in a courtroom landmark, Anthropic agreed to pay $1.5 billion to authors over pirated books used in training its Claude model—a watershed moment for copyright in AI.

Let’s get into it.

In This Issue:

Why language models hallucinate by design
Sam Altman says social media feels fake
Anthropic’s $1.5B copyright settlement

Why Language Models Hallucinate by Design

TL;DR:

OpenAI researchers argue that language models aren’t glitchy, they’re optimized for test-taking. Because training and evaluation reward confident answers, models default to plausible guesses rather than admitting uncertainty, which is why hallucinations persist.

Our Take:

Hallucination isn’t a bug, it’s baked into the system. Treating it as an unexpected error misses the point. Real progress means redesigning evaluation benchmarks, not just the models, to prioritize honesty over accuracy. If you’re building or assessing LLMs, ask yourself: where do your pipelines favor confidence at the expense of honesty?
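If you want to see the incentive for yourself, here’s a minimal back-of-the-envelope sketch in Python (our illustration, not OpenAI’s code, and the 30% figure is made up). Under accuracy-only grading, a wrong answer and an “I don’t know” both score zero, so guessing always has a higher expected score, which is exactly the pressure the researchers describe:

```python
# Expected benchmark score for one question, under a simple grading scheme.
# A correct answer earns 1 point, a wrong answer loses `wrong_penalty` points,
# and abstaining ("I don't know") earns 0.
def expected_score(p_correct: float, wrong_penalty: float, abstain: bool) -> float:
    if abstain:
        return 0.0
    return p_correct * 1.0 - (1.0 - p_correct) * wrong_penalty

p = 0.3  # hypothetical: the model is unsure, only 30% likely to guess right

# Accuracy-only grading (wrong answers cost nothing): guessing wins, so models learn to guess.
print(expected_score(p, wrong_penalty=0.0, abstain=False))  # 0.3
print(expected_score(p, wrong_penalty=0.0, abstain=True))   # 0.0

# Grading that penalizes confident errors: abstaining wins whenever the model is unsure.
print(expected_score(p, wrong_penalty=1.0, abstain=False))  # -0.4
print(expected_score(p, wrong_penalty=1.0, abstain=True))   # 0.0
```

Change the penalty and the incentive flips, which is the whole argument: fix the scoreboard, and the guessing follows.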

Sam Altman Says Social Media Feels Fake

TL;DR:

Sam Altman said it plainly: social media has become inauthentic, and bots are part of the problem. Whether or not this foreshadows a new AI-powered social platform from OpenAI, the message is clear: current feeds are drifting away from human connection.

Our Take:

When your feeds feel hollow, it’s not just algorithm fatigue; it may be that half the participants don’t exist. If AI is moving into social, authenticity has to be part of the value proposition. Any platform that wants to ring true may find success by intentionally limiting, or clearly labeling, its AI involvement.

Anthropic’s $1.5B Copyright Settlement

TL;DR:

Anthropic agreed to a $1.5 billion settlement over using approximately 500,000 pirated books to train its AI. The deal includes $3,000 per work, destruction of infringing datasets, and close judicial scrutiny, making it the largest AI-era copyright settlement to date.

Our Take:

Money talks, but compliance echoes louder. Anthropic’s settlement is a cautionary tale: securing access to data isn’t enough; how you get it matters. For developers and enterprises, this should shift strategies toward licensed, auditable data pipelines. And for content creators, it raises the question: who owns what in the age of algorithmic authorship?

🚀 Thank you for reading The Download

Your trusted source for the latest AI developments, keeping you in the loop but never overwhelmed. 🙂

*Want to get in front of 600k+ readers? Email [email protected]
