
Your LinkedIn Profile Is Training the Algorithm That Will Score You


The EU AI Act’s enforcement deadline is August 2026. That’s five months away.

LinkedIn’s default setting opts your profile data into AI training. That AI scores candidates. Right now, you’re contributing training data to the same model that will assess you for your next role.

That’s not speculation. It’s in LinkedIn’s privacy settings, and it’s on a collision course with EU law.

GDPR requires “freely given” consent for personal data processing. An opt-out model doesn’t meet that bar for sensitive processing. Recruitment AI sits squarely in the sensitive category.

The EU AI Act classifies candidate-scoring systems as high-risk under Annex III. High-risk systems face strict rules around training data quality, bias auditing and transparency. When enforcement begins in August, platforms won’t be able to rely on opt-out consent for this kind of processing.

A platform using opt-out consent to train high-risk AI is a compliance problem. Enforcement actions are being prepared now.

The Training Loop

1. Your LinkedIn profile: public, detailed, default opt-in to AI training.
2. AI training data pool: your work history, skills and tenure ingested.
3. Candidate scoring model: trained on profiles like yours.
4. Your application scored: no visibility into the result, no right of explanation.

Step 4 feeds back into model retraining. Your future applications are scored by a model your past profiles helped build.

The circular nature of this is worth sitting with. You don’t know what weight the model gives your profile history. An old job title might be penalising you. You’d have no way of knowing.

The FCA Angle

This one’s specific to engineers in financial services, but it’s worth knowing.

The FCA’s Non-Financial Misconduct rules come into force in September 2026. These will allow fitness and propriety assessments to include behaviour outside the workplace. That includes social media activity.

A heated comment thread. An old post taken out of context. Either of these could feed into a conduct review for senior engineers at regulated firms.

Most engineers in fintech don't know this is coming. Most compliance teams are still working out how to apply it. But it's in the rules, and it goes live in six months.

The Engineering Alternative

The platform-sovereign model is LinkedIn's default. Your data lives in their database. They decide who accesses it, how it's processed, and which models it trains.

The alternative is Self-Sovereign Identity. The idea has been around for years. The infrastructure is now close enough to matter.

|  | Platform-Sovereign (LinkedIn) | Self-Sovereign Identity |
|---|---|---|
| Data location | LinkedIn's servers | Your identity wallet |
| Access control | LinkedIn decides | You grant, you revoke |
| Recruiter visibility | Always on, searchable by anyone | On request only |
| Third-party scrapers | Can access freely | Blocked by design |
| AI training | Default opt-in | Not applicable |
| Proof of claims | Self-reported, unverified | Cryptographically signed by issuer |

Zero-Knowledge Proofs are the key mechanism here. A ZKP lets you prove a claim is true without revealing the underlying data. You can prove you have five years of Kubernetes experience without exposing your full employment history.
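To make the mechanism concrete, here is a minimal sketch of a Schnorr-style proof of knowledge made non-interactive with a Fiat-Shamir challenge. The group parameters and the secret are illustrative, not production choices; real credential systems use standardized groups and circuit-based proofs (zk-SNARKs and similar) to prove structured statements like "at least five years of experience", rather than bare knowledge of a secret.

```python
import hashlib
import secrets

# Toy group parameters. NOT secure sizes: real systems use
# standardized curves and much larger, carefully chosen groups.
P = 2**127 - 1   # Mersenne prime modulus
G = 3            # generator (illustrative choice)

def _challenge(*parts: int) -> int:
    """Fiat-Shamir: derive the verifier's challenge by hashing the transcript."""
    h = hashlib.sha256()
    for part in parts:
        h.update(part.to_bytes(32, "big"))
    return int.from_bytes(h.digest(), "big") % (P - 1)

def prove_knowledge(secret: int) -> tuple[int, int, int]:
    """Prove knowledge of `secret` where public = G**secret % P,
    without revealing `secret` itself."""
    public = pow(G, secret, P)
    k = secrets.randbelow(P - 1)             # one-time nonce
    commitment = pow(G, k, P)                # R = G^k
    c = _challenge(public, commitment)       # c = H(y, R)
    response = (k + c * secret) % (P - 1)    # s = k + c*x
    return public, commitment, response

def verify(public: int, commitment: int, response: int) -> bool:
    """Check G^s == R * y^c mod P, without ever seeing the secret."""
    c = _challenge(public, commitment)
    return pow(G, response, P) == (commitment * pow(public, c, P)) % P
```

The verifier learns that the prover knows the secret behind `public` and nothing else; the one-time nonce `k` masks the secret inside `response`.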

Verifiable Credentials complement this. Your work history becomes a set of cryptographically signed statements, each issued by the relevant employer or institution. You hold them in a wallet and present only what a specific role requires.
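A rough sketch of that flow, with a hypothetical issuer and an HMAC standing in for the issuer's signature (real Verifiable Credentials use asymmetric signatures such as Ed25519, so anyone can verify with only the issuer's public key). The issuer signs salted commitments to each attribute rather than the raw values, which is what lets the holder reveal attributes selectively:

```python
import hashlib
import hmac
import json
import secrets

# Stand-in symmetric key so the sketch stays stdlib-only.
# Real issuers sign with a private key; verifiers use the public key.
ISSUER_KEY = b"acme-corp-issuing-key"

def _commit(attr: str, value: str, salt: str) -> str:
    """Salted hash commitment to a single attribute."""
    return hashlib.sha256(f"{attr}|{value}|{salt}".encode()).hexdigest()

def issue(attributes: dict[str, str]) -> dict:
    """Issuer signs commitments to each attribute, not the raw values."""
    salts = {k: secrets.token_hex(16) for k in attributes}
    commitments = {k: _commit(k, v, salts[k]) for k, v in attributes.items()}
    payload = json.dumps(commitments, sort_keys=True).encode()
    signature = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    # The holder keeps everything, including the salts and raw values.
    return {"commitments": commitments, "signature": signature,
            "salts": salts, "attributes": attributes}

def present(credential: dict, reveal: list[str]) -> dict:
    """Holder discloses only the requested attributes plus their salts."""
    return {"commitments": credential["commitments"],
            "signature": credential["signature"],
            "revealed": {k: (credential["attributes"][k],
                             credential["salts"][k]) for k in reveal}}

def verify(presentation: dict) -> bool:
    """Verifier checks the issuer signature over all commitments,
    then checks each revealed value against its commitment."""
    payload = json.dumps(presentation["commitments"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, presentation["signature"]):
        return False
    return all(presentation["commitments"][k] == _commit(k, v, salt)
               for k, (v, salt) in presentation["revealed"].items())
```

A holder could present only a `k8s_years` attribute to a recruiter while the employer name and every other attribute stay hidden, yet the verifier still confirms the issuer vouched for the whole set.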

What’s Coming in 2026

This isn’t theoretical. Three concrete regulatory shifts are landing this year.

March 2026 — Now
LinkedIn's AI-training opt-out setting is live, but the default remains opt-in and none of the rules below are yet enforced. GDPR challenges are in progress. The window to act before enforcement is now.
August 2026
EU AI Act enforcement begins
Recruitment and candidate-scoring AI is classified as high-risk under Annex III. Platforms must document training data sources, conduct bias audits and provide transparency reports. Opt-out consent won't satisfy the requirements for high-risk AI systems.
September 2026
FCA Non-Financial Misconduct rules
Fitness and propriety assessments for certified persons at regulated firms can include behaviour outside the workplace. Social media activity is explicitly in scope. This applies to senior and certified roles in UK financial services.
Late 2026
eIDAS 2.0 Digital Identity Wallet mandate
Large employers in Europe must accept the EU Digital Identity Wallet for onboarding. Government-backed, privacy-first identity verification without a platform intermediary. This is what makes the LinkedIn-less engineer a normal thing rather than a suspicious gap.

What to Do Now

You don’t need to delete your account. A ghost profile works fine: your name, your title, a link to your personal site. Notifications off. No activity required.

The substantive version of your professional identity should live somewhere you control. A personal site with a genuine technical perspective does more for credibility in regulated sectors than a well-maintained feed.

If you’re in a regulated firm, check whether your social media activity is covered under your firm’s conduct policy. Most policies haven’t been updated to reflect the FCA rules coming in September, and most engineers haven’t been told to check.

The compliance surface for engineers is getting broader. Worth knowing where you stand before August.