The EU AI Act’s enforcement deadline is August 2026. That’s five months away.
LinkedIn’s default setting opts your profile data into AI training. That AI scores candidates. Right now, you’re contributing training data to the same model that will assess you for your next role.
That’s not speculation. It’s in LinkedIn’s privacy settings, and it’s on a collision course with EU law.
## The Consent Problem
Where GDPR relies on consent as the lawful basis for processing personal data, that consent must be “freely given”. An opt-out default doesn’t meet that bar for sensitive processing, and recruitment AI sits squarely in the sensitive category.
The EU AI Act classifies candidate-scoring systems as high-risk under Annex III. High-risk systems face strict rules around training data quality, bias auditing and transparency. When enforcement begins in August, platforms won’t be able to rely on opt-out consent for this kind of processing.
A platform that relies on opt-out consent to train high-risk AI therefore has a compliance problem, and regulators are already preparing enforcement actions.
The circular nature of this is worth sitting with: your data trains the model, and the model scores you. You don’t know what weight it gives your profile history. An old job title might be penalising you right now, and you’d have no way of knowing.
## The FCA Angle
This one’s specific to engineers in financial services, but it’s worth knowing.
The FCA’s Non-Financial Misconduct rules come into force in September 2026. These will allow fitness and propriety assessments to include behaviour outside the workplace. That includes social media activity.
A heated comment thread. An old post taken out of context. Either of these could feed into a conduct review for senior engineers at regulated firms.
Most engineers in fintech don’t know this is coming. Most compliance teams are still working out how to apply it. But it’s in the rules, and it goes live in six months.
## The Engineering Alternative
The platform-sovereign model is LinkedIn’s default. Your data lives in their database. They decide who accesses it, how it’s processed and which models it trains.
The alternative is Self-Sovereign Identity. The idea has been around for years. The infrastructure is now close enough to matter.
| | Platform-Sovereign (LinkedIn) | Self-Sovereign Identity |
|---|---|---|
| Data location | LinkedIn's servers | Your identity wallet |
| Access control | LinkedIn decides | You grant, you revoke |
| Recruiter visibility | Always on, searchable by anyone | On request only |
| Third-party scrapers | Can access freely | Blocked by design |
| AI training | On by default (opt-out) | Not applicable |
| Proof of claims | Self-reported, unverified | Cryptographically signed by issuer |
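The access-control row is the sharpest difference, and it can be sketched in a few lines. Everything below is a hypothetical illustration, not a real wallet API: the holder stores the credentials and decides, per party, which fields are visible.

```python
class IdentityWallet:
    """Hypothetical sketch: the holder, not the platform, controls access."""

    def __init__(self, credentials):
        self._credentials = credentials
        self._grants = {}                      # party -> set of visible fields

    def grant(self, party, fields):
        self._grants.setdefault(party, set()).update(fields)

    def revoke(self, party):
        self._grants.pop(party, None)          # access ends immediately

    def read(self, party):
        allowed = self._grants.get(party, set())
        return {f: self._credentials[f] for f in allowed}

wallet = IdentityWallet({"name": "A. Engineer", "employer": "Acme"})
wallet.grant("recruiter@search-firm", {"name"})
wallet.read("recruiter@search-firm")    # {'name': 'A. Engineer'}
wallet.revoke("recruiter@search-firm")
wallet.read("recruiter@search-firm")    # {} — nothing left to scrape
```

Contrast this with the platform-sovereign model, where the equivalent of `read()` is always on and the holder has no `revoke()` at all.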
Zero-Knowledge Proofs are the key mechanism here. A ZKP lets you prove a claim is true without revealing the underlying data. You can prove you have five years of Kubernetes experience without exposing your full employment history.
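Production ZKP systems for credentials use heavyweight machinery (zk-SNARKs and similar), but the core idea — convincing a verifier you know a secret without ever sending it — can be shown with Schnorr’s classic protocol. The parameters below are deliberately toy-sized for readability; real deployments use ~256-bit elliptic-curve groups.

```python
import secrets

# Toy parameters: p = 2q + 1 with q prime; g generates the order-q subgroup.
p, q, g = 23, 11, 4

def keygen():
    x = secrets.randbelow(q)          # secret (e.g. a credential key)
    return x, pow(g, x, p)            # (secret, public value y = g^x mod p)

def prove(x, challenge_fn):
    r = secrets.randbelow(q)          # fresh randomness for every proof
    t = pow(g, r, p)                  # commitment, sent first
    c = challenge_fn(t)               # verifier's unpredictable challenge
    s = (r + c * x) % q               # response; reveals nothing about x on its own
    return t, c, s

def verify(y, t, c, s):
    # Accept iff g^s == t * y^c (mod p), which holds only if the prover knows x.
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x, y = keygen()
t, c, s = prove(x, lambda t: secrets.randbelow(q))
assert verify(y, t, c, s)             # proof checks out; x was never transmitted
```

The same shape — commit, challenge, respond — underlies the predicate proofs (“at least five years”, “over 18”) that credential wallets aim to offer.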
Verifiable Credentials build on the same principle. Your work history becomes a set of cryptographically signed statements. Each one is issued by the relevant employer or institution. You hold them in a wallet and present only what a specific role requires.
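A minimal sketch of that issue–hold–present flow, with one loud caveat: it uses HMAC as a stand-in for real public-key signatures, whereas actual VC stacks use Ed25519 or similar and standard formats such as the W3C Verifiable Credentials data model. Every name and key here is invented for illustration.

```python
import hashlib, hmac, json

ISSUER_KEY = b"acme-corp-signing-key"   # stand-in for the issuer's private key

def issue(claims: dict) -> dict:
    """Issuer signs each claim independently, so they can be disclosed separately."""
    return {
        name: {
            "value": value,
            "sig": hmac.new(ISSUER_KEY, json.dumps([name, value]).encode(),
                            hashlib.sha256).hexdigest(),
        }
        for name, value in claims.items()
    }

def present(wallet: dict, fields: list) -> dict:
    """Holder reveals only the claims a given role requires."""
    return {name: wallet[name] for name in fields}

def verify(presentation: dict) -> bool:
    """Verifier recomputes each signature (here, with the shared key)."""
    return all(
        hmac.compare_digest(
            claim["sig"],
            hmac.new(ISSUER_KEY, json.dumps([name, claim["value"]]).encode(),
                     hashlib.sha256).hexdigest())
        for name, claim in presentation.items()
    )

wallet = issue({"employer": "Acme Corp", "role": "Platform Engineer",
                "years_kubernetes": 5})
disclosure = present(wallet, ["years_kubernetes"])   # the rest stays private
assert verify(disclosure)
```

Because each claim carries its own signature, the holder can disclose one field without breaking the verifiability of the rest — the property the table above calls “cryptographically signed by issuer”.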
## What’s Coming in 2026
This isn’t theoretical. Three concrete regulatory shifts are landing this year:

- GDPR enforcement against opt-out consent for AI training, with actions being prepared now.
- The EU AI Act’s high-risk obligations, covering candidate-scoring systems, taking effect in August.
- The FCA’s Non-Financial Misconduct rules, extending conduct assessments to behaviour outside the workplace, in force from September.
## What to Do Now
You don’t need to delete your account. A ghost profile works fine: your name, your title, a link to your personal site. Notifications off. No activity required.
The substantive version of your professional identity should live somewhere you control. A personal site with a genuine technical perspective does more for credibility in regulated sectors than a well-maintained feed.
If you’re in a regulated firm, check whether your social media activity is covered under your firm’s conduct policy. Most policies haven’t been updated to reflect the FCA rules coming in September, and most engineers haven’t been told to check.
The compliance surface for engineers is getting broader. Worth knowing where you stand before August.