In partnership with 1440

Every headline serves an opinion. Except ours.

Remember when the news was about what happened, not how to feel about it? 1440's Daily Digest is bringing that back. Every morning, they sift through 100+ sources to deliver a concise, unbiased briefing — no pundits, no paywalls, no politics. Just the facts, all in five minutes. For free.

Your Impact Career Supercharged — Join Now

Impact careers move fast. Whether you want hands-on learning and community or just the best opportunities delivered to your inbox, PCDN has you covered — all at prices that make access truly equitable.

PCDN Career Campus
Get the full experience:

  • Bi-weekly office hours with experts who’ve been in your shoes

  • Monthly workshops on cutting-edge topics (our latest? AI tools for social impact)

  • Year-round skill-building sessions tailored for changemakers

  • Active community support via Slack and WhatsApp groups

  • 350+ curated social impact opportunities every single month — remote roles, fellowships, grants, and more

PCDN Career Digest

  • 200+ vetted opportunities delivered 6 days a week (development, tech for good, climate finance, communications, global roles)

  • Practical career advice, tools, and strategies — no fluff

  • Events and upskilling recommendations worth your time

  • Straight to your inbox, so you stop scrolling endless job boards

  • Try it free for seven days.

Stop scrolling job boards. Start getting opportunities delivered.

Social Impact Opportunities

Apply Now: Fully Funded Rotary Peace Fellowships (Master’s & Certificate Programs) (sponsored post)

Application deadline: 15 May

Rotary International is now accepting applications for its globally recognized Rotary Peace Fellowship. This fully funded opportunity supports experienced peace and development professionals looking to deepen their skills, expand their networks, and advance their careers in peacebuilding and social change.

Become An AI Expert In Just 5 Minutes

If you’re a decision maker at your company, you need to be on the bleeding edge of, well, everything. But before you go signing up for seminars, conferences, lunch ‘n learns, and all that jazz, just know there’s a far better (and simpler) way: Subscribing to The Deep View.

This daily newsletter condenses everything you need to know about the latest and greatest AI developments into a 5-minute read. Squeeze it into your morning coffee break and before you know it, you’ll be an expert too.

Subscribe right here. It’s totally free, wildly informative, and trusted by 600,000+ readers at Google, Meta, Microsoft, and beyond.

AI for Impact | The Battle Over AI's Soul — And Who Is Already Losing

I canceled my ChatGPT subscription recently. The drop in output quality was part of it, but the real driver was the ethics — or the absence of them. For quite some time now, Perplexity has been my daily go-to — a flexible wrapper across multiple models that keeps my workflow adaptable depending on the task. More recently I've come back to Claude, which I paid for in the past but didn't love at first. That's changed. The vibe coding abilities at the higher tier are genuinely mind-blowing, and of all the major models, Claude consistently produces the closest thing to human-level language right now.

That said, no platform deserves blind trust. Every major AI company carries serious ethical baggage, and Perplexity inherits the flaws of whatever foundational models it's drawing from on a given day. Claude is built by Anthropic, a public benefit corporation — a legal structure I genuinely respect — but even that status can't fully insulate a company from the pull of defense money and government pressure. If you want to understand how the industry got to this point, read Karen Hao's Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI. Hao spent years documenting how a nonprofit with a safety mission became an extraction engine, building its empire on underpaid Global South workers, massive energy consumption, and a culture of secrecy dressed up as altruism. It's uncomfortable reading. It should be.

They Stole the Books Too

Before we get to the Pentagon, another dimension of this industry's ethics deserves more attention than it gets: almost every major AI company built its foundational models on hundreds of thousands — in some cases millions — of copyrighted works, without asking, without paying, and without telling anyone.

I recently received a letter asking whether I wanted to participate in a settlement related to one of my co-edited books. Apparently it was among the estimated 500,000 books Anthropic downloaded from Library Genesis — a notorious repository of pirated books — to train Claude. That letter made the abstract suddenly very concrete. The work of authors, researchers, and editors was quietly absorbed into commercial AI products that now generate enormous revenue. The settlement, Bartz v. Anthropic, resulted in a $1.5 billion payout — the largest copyright recovery in US history — with authors receiving approximately $3,000 per book. The deadline to participate is March 30, 2026.

That number sounds significant until you note that Anthropic's valuation sits around $183 billion. The $1.5 billion represents less than one percent of that. The Authors Guild called it a powerful precedent, and structurally it does set one. But it also reveals the fundamental transaction that built this industry: take what you need at scale, settle later for a fraction of the value generated, and keep building.
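
A quick back-of-envelope check, using the figures reported above:

  • ~500,000 books × ~$3,000 per book ≈ $1.5 billion (the settlement total)

  • $1.5 billion ÷ $183 billion ≈ 0.8 percent of Anthropic's valuation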

OpenAI's record is worse. Courts recently ruled that OpenAI must hand over internal communications about two large datasets of pirated books — "Books 1" and "Books 2" — that the company deleted after training. OpenAI initially claimed these datasets were never used for training, then tried to invoke attorney-client privilege to block discovery into why they were deleted. A federal judge found a "fundamental conflict" in that position. OpenAI also faces a consolidated copyright lawsuit brought by writers including Ta-Nehisi Coates, John Grisham, and Jonathan Franzen. Meta faces similar allegations, with court filings claiming Mark Zuckerberg personally authorized access to a shadow library of over 7.5 million books to train Meta's models.

Millions of creators never consented to having their work absorbed, never received compensation, and only found out years later when a settlement letter arrived in their inbox. That is not an accident of the process — it was the process.

What Happened in Late February and Why It Matters

In one remarkable week, the contradictions of this industry cracked wide open. Anthropic had been working under a $200 million Pentagon contract signed in 2025, celebrated at signing as a commitment to national security. Claude was already being deployed across the Department of Defense for intelligence analysis, operational planning, and cyber operations. Reports surfaced that it was used during the US special operations raid targeting Venezuelan leader Nicolás Maduro — raising immediate questions about the gap between stated ethical limits and how these tools actually get used once they're inside government infrastructure.

When the Department of Defense then demanded Anthropic strip away safety restrictions entirely — opening Claude to autonomous targeting and mass domestic surveillance with no limits — the company drew a hard line and refused. The Trump administration responded by banning Anthropic from government systems and threatening to invoke the Defense Production Act to seize the technology outright. Anthropic has vowed to challenge the ban in court.

Within hours of that ban, OpenAI announced a new deal to embed its models directly into the Pentagon's classified military networks. The company cited ethical safeguards, but the key terms sit inside classified annexes no one outside government will ever read. "Human oversight" and "legitimate objectives" mean whatever the government decides they mean, behind closed doors. That is not accountability — it is a press release.

Anthropic's position is genuinely complicated. The company embraced government contracts, and its technology ended up inside operations that raise serious ethical questions. At the same time, drawing a firm line against autonomous weapons and mass domestic surveillance — and being willing to lose $200 million and face government retaliation rather than abandon those limits — is a meaningful stand. It doesn't erase the contradictions, but it does distinguish Anthropic from companies that simply said yes to everything without hesitation.

The Surveillance Reach Goes Far Beyond the Military

The military conflict grabs headlines, but the deeper story is about surveillance infrastructure expanding into every corner of life. Palantir is a useful lens — not because of any single partnership with an AI lab, but because of what its reach reveals about how interconnected these systems have become. Palantir built ImmigrationOS for ICE — a $30 million deportation targeting system aggregating social media activity, financial records, license plates, and physical characteristics into a continuous surveillance engine. It holds a $10 billion Army software contract and is embedded across federal agencies. AI models from multiple companies flow through these government pipelines, and once a model is inside that infrastructure, the originating company has limited control over what happens next.

That reach keeps expanding into civilian life. Palantir recently secured a £330 million contract to manage patient data for the UK's National Health Service — deeply personal medical information for millions of people, now held by a company built on military and intelligence contracts. The British Medical Association has actively opposed this, warning that patient data could move into state surveillance pipelines. Health systems, border enforcement, military targeting — the architecture is converging, and few people outside these deals have any visibility into how.

As Privacy International noted, the dispute between AI companies and governments is larger than it appears — it surfaces a fundamental question about how tech companies enable governments worldwide to monitor and target civilian populations in both war and peacetime, and what obligations those companies carry as a result.

AI's Dual Edge: Peacebuilding Potential and Real Risks

As the lead scientist on the Artificial Intelligence and Peacebuilding report for the International Panel on the Information Environment, I found the full complexity of this technology impossible to ignore. The potential is genuinely significant — AI tools are being used to monitor ceasefires, translate across language barriers in conflict zones, accelerate humanitarian logistics, and amplify voices of communities that have historically been shut out of peace processes entirely. That potential is real and worth building toward.

But so are the risks, and they run in the opposite direction with equal force. The same capabilities that support conflict prevention can just as easily automate surveillance, accelerate targeting decisions, and give authoritarian governments tools to suppress dissent at scale. When AI gets introduced into fragile environments without transparency, accountability, or community input, it consistently ends up reinforcing existing power structures rather than challenging them. The gap between AI's promise for peace and its current deployment reality is one of the most urgent conversations the impact sector needs to be having right now.

Picking Through a Flawed Market

Mistral out of France represents a structurally different approach — GDPR-aligned, open-weight, built with data sovereignty in mind, and operating under real regulatory accountability from the EU AI Act. The output quality fell short for my needs the first time around, but the privacy architecture matters enough that revisiting it makes sense. Europe's regulatory framework is at least attempting to create real limits rather than bury safeguards in classified documents. Even open-weight models carry risks — they can be fine-tuned and deployed by governments with no regulatory constraints — but the baseline posture is meaningfully different from a US lab that pivots to a Pentagon deal the moment a competitor gets punished for having principles.

Every tool currently available involves some form of compromise. The choice isn't between clean and dirty options. It's about understanding the specific tradeoffs of each platform: where the data goes, who holds the contracts, which government agencies can access those pipelines, and how much of that is deliberately hidden from public view.

A Challenge That Is Only Going to Intensify

What happened between Anthropic and the Pentagon in February 2026 is not an isolated dispute. It is the opening round of a fight that will play out across every major AI company, in every country with significant military and surveillance ambitions. Governments will keep demanding total access. Companies will keep facing the choice between stated principles and the money and power on the table. The track record so far — AI inside military operations, patient health data inside surveillance companies, authors receiving settlement letters years after their work was quietly taken — tells a clear story about how those negotiations tend to end.

The civilians processed by algorithmic targeting, the migrants tracked through deportation platforms, the patients whose records sit inside a surveillance company, and the writers whose copyrighted work was absorbed without consent — none of them had a seat at the table where these decisions were made.

Voluntary ethical commitments have a very short shelf life when confronted with state power and serious money. Without binding regulations that explicitly cover military deployments, border enforcement, policing, health data, and training data rights, the impact sector risks using tools that quietly undermine the very communities it exists to serve.

Social Impact News & Resources

😄 Joke of the Day

  • My to-do list and my attention span finally synced—now nothing gets done, but at least it’s perfectly aligned.


💼 Jobs, Jobs, Jobs

If you want a consistently updated pipeline of roles across humanitarian response, development, human rights, and peacebuilding, ReliefWeb Jobs is one of the most reliable hubs in the sector. Browse and filter here: ReliefWeb Jobs

🎧 Podcast to Check Out

The WIRED Gadget Lab episode “Extreme Heat Is Here to Stay” unpacks why extreme heat is becoming a defining risk—and what that means for infrastructure, inequality, and adaptation choices. Listen here: Extreme Heat Is Here to Stay

🔗 LinkedIn Profile to Follow

Amy Pritchard shares thoughtful, practical insights on democracy, organizing, and social impact—from mobilizing voters to strengthening civic infrastructure—making her feed a rich resource for practitioners and learners alike. Follow her on LinkedIn: Amy Pritchard

Keep Reading