
ChatGPT at Work: The Mittelstand Guide to What’s Allowed, What’s Forbidden, and What to Train

Henri Jung, Co-founder at Superkind


Two things are true in most Mittelstand companies today. First, your employees are already using ChatGPT - on their own phones, on their own accounts, for your work. Second, almost nobody has a clear policy on what they can paste into it, what they cannot, and what happens if someone leaks a customer contract into a consumer chat window.

The answer is not a ban. Bitkom 2025 found 66 percent of German employees already use AI at work, and most usage is private and invisible to employers [1]. Banning ChatGPT pushes the behaviour underground. The answer is also not unchecked freedom - IBM 2024 reports that shadow AI adds USD 670,000 to the average cost of a data breach [2].

What you need is a one-page policy, a sanctioned tool, and a 60-day rollout your works council can sign. This guide gives you all three - built for Mittelstand reality, not Silicon Valley abstraction.

TL;DR

Bans do not work - 66 percent of German employees already use AI. A ban hides the risk instead of removing it.

Three pillars - a sanctioned enterprise tool, a one-page policy, and 2-4 hours of training per employee.

The DSGVO and EU AI Act line - Article 4 literacy training becomes mandatory in August 2026. Start now.

Shadow AI cost - IBM reports shadow AI adds USD 670,000 to the average breach cost. The cheapest prevention is a sanctioned tool plus a clear policy.

60-day rollout - tier selection, policy draft, works council alignment, training, launch. Doable in a quarter.

The Shadow AI Reality

Most Mittelstand leaders underestimate how far ChatGPT use has spread inside their own walls. The data from 2025 is unambiguous - people are using it everywhere, usually without you knowing.

  • 66 percent of German employees use AI at work - up from 49 percent the year before, according to Bitkom 2025 [1]. The overwhelming majority use consumer tools, not enterprise tools.
  • Shadow AI is the default state - BCG 2025 reports that 75 percent of managers use AI regularly but only 51 percent of frontline workers do. That gap is almost entirely private-account ChatGPT [12].
  • The financial exposure is real - IBM 2024 found shadow AI adds an average USD 670,000 to the cost of a breach [2]. That is a single incident, before reputational damage.
  • The Samsung lesson - in 2023, Samsung engineers pasted proprietary source code into ChatGPT. Samsung banned it company-wide within weeks [8]. It became the canonical example of why banning after a leak is expensive.
  • The coaching signal - 37 percent of employees say they worry AI will erode their skills [13]. That worry pushes them toward private use, where they feel less observed.

The Real Cost of Doing Nothing

Doing nothing is not neutral. Every week without a sanctioned tool and a clear policy, your company accepts three things: employees pasting data into unsanctioned accounts, zero visibility into what they do, and compounding exposure under the EU AI Act once it becomes enforceable.

Signal | Current State | Source
German employees using AI at work | 66 percent | Bitkom 2025 [1]
Shadow AI breach cost | +USD 670,000 per incident | IBM 2024 [2]
Manager vs frontline AI use gap | 75% vs 51% | BCG 2025 [12]
Employees concerned about skill erosion | 37 percent | EY 2025 [13]
EU AI Act Article 4 literacy deadline | 2 August 2026 | EU AI Act [4]

What’s Allowed: Green-Light Use Cases

A useful policy is specific about what people can actually do, not a wall of legal caveats. These five categories cover roughly 80 percent of day-to-day use for most Mittelstand employees.

  1. Drafting and editing internal content - memos, emails to colleagues, internal policies, meeting notes, project briefs. Low risk, high productivity. Encourage it.
  2. Summarising public documents - analyst reports, research papers, news, competitor public filings. Paste the document, get a structured summary. No confidential data involved.
  3. Explaining and learning - “explain this legal clause”, “what does this error message mean”, “walk me through this industry term”. The model as a patient tutor is one of its highest-value uses.
  4. Generating first drafts - code prototypes, SQL queries, marketing copy, translations, structured lists, interview questions. Always reviewed by a human before use.
  5. Brainstorming and structuring ideas - meeting agendas, project plans, pro and con lists, SWOT analyses, creative options. The model is a thought partner, not a decision maker.

The Green-Light Principle

If the content does not identify a customer, does not contain non-public financials, does not include personal data of employees or third parties, and does not touch security-sensitive infrastructure, it is almost always green-light on a sanctioned tier with a signed DPA.

What’s Forbidden: The Red Lines

Red lines matter more than green lights. A policy that is vague about what is forbidden creates the exact grey zone where leaks happen. State the red lines in plain language, with concrete examples.

Red Line | Example | Why
Customer data | Contracts, invoices, email threads, CRM notes with names | DSGVO exposure, contractual duty of care
Source code with business logic | Core product code, proprietary algorithms, keys | IP leakage (Samsung case)
Non-public financials | M&A, unreleased results, salaries, board material | Insider information, fiduciary duty
Personal data of employees | Performance reviews, medical, grievances, CVs | DSGVO Art. 9 special categories risk
Security-sensitive content | Passwords, API keys, network maps, incident logs | Attack surface expansion
Regulated legal advice | Drafting a binding contract for a client, legal opinions | Professional liability, duty to client
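
Policy text is the primary control, but a lightweight technical guardrail can catch careless pastes before they leave the building. Below is a minimal sketch in Python - the patterns are illustrative assumptions for this article, not a production DLP ruleset; a real deployment would add company-specific identifiers (customer names, project codes) or use a dedicated DLP engine:

```python
import re

# Illustrative red-line patterns only - assumptions for this sketch,
# not an exhaustive or production-grade DLP ruleset.
RED_LINE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "credential keyword": re.compile(r"(api[_-]?key|secret|password)\s*[:=]", re.IGNORECASE),
}

def red_line_hits(prompt: str) -> list[str]:
    """Return the names of all red-line patterns found in the prompt."""
    return [name for name, pattern in RED_LINE_PATTERNS.items() if pattern.search(prompt)]

draft = "Summarise this thread: jane.mueller@example.com, IBAN DE89370400440532013000"
hits = red_line_hits(draft)
if hits:
    print("Blocked - possible red-line content:", ", ".join(hits))
else:
    print("No obvious red-line content found.")
```

A check like this stops accidents, not determined misuse - the policy and the training remain the actual control.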

Consumer ChatGPT vs Enterprise

Consumer (Free, Plus)

  • Trains on your data by default unless you opt out in settings
  • No DPA signed with the company
  • No admin controls - employer has zero visibility
  • Data mostly in the US, not EU-resident
  • Not DSGVO-compliant for confidential business data

Enterprise (Team, Enterprise)

  • No training on your data - contractual commitment
  • Signed DPA and SCC for DSGVO compliance
  • SSO, admin console, audit logs
  • EU data residency available on Enterprise
  • Works council alignment easier with documented controls

DSGVO and EU AI Act Compliance

Most Mittelstand companies overestimate the legal complexity of using ChatGPT in a compliant way. The core requirements are surprisingly concrete - and the EU AI Act layers on top rather than replacing DSGVO.

The DSGVO essentials

  • Lawful basis - identify the legal basis for any personal data you process. For internal productivity use, Art. 6(1)(f) legitimate interest is typically the right basis, documented in a balancing test.
  • Data processing agreement - you must have a signed DPA with OpenAI, Microsoft, Anthropic, or Google. All four vendors provide one for their enterprise tiers [6][15][16].
  • Purpose limitation - document what the tool is for. “General productivity assistance for drafting and summarisation” is a legitimate, documentable purpose.
  • Data minimisation - employees only paste what is needed. The policy enforces this, training reinforces it.
  • DPIA when required - for high-risk processing (Art. 35 DSGVO), run a data protection impact assessment. Routine drafting rarely triggers it; HR screening or customer profiling usually does.
  • Records of processing - add AI tool usage to your Art. 30 records. Simple entry, high compliance value.
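
To show how small the Art. 30 entry really is, here is a minimal sketch of such a record as structured data. The field names are our own simplification of the Art. 30(1) requirements, not an official schema:

```python
from dataclasses import dataclass, field

@dataclass
class ProcessingRecord:
    """Simplified Art. 30 DSGVO records-of-processing entry (illustrative fields)."""
    activity: str
    purpose: str
    legal_basis: str
    data_categories: list[str]
    recipients: list[str]
    third_country_transfer: str
    retention: str
    security_measures: list[str] = field(default_factory=list)

chatgpt_entry = ProcessingRecord(
    activity="AI assistant for drafting and summarisation",
    purpose="General productivity assistance for employees",
    legal_basis="Art. 6(1)(f) DSGVO - legitimate interest, balancing test on file",
    data_categories=["Business correspondence", "Non-confidential internal documents"],
    recipients=["AI vendor as processor, signed DPA"],
    third_country_transfer="SCC or EU data residency per vendor contract",
    retention="Per admin retention settings",
    security_measures=["SSO", "Admin console", "Audit logs", "Red-line policy and training"],
)
```

One entry like this per AI tool is usually all the Art. 30 update requires.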

The EU AI Act layer

  • Most ChatGPT use is minimal or limited risk - productivity use falls into the lightest obligation tier. Transparency where output faces customers (Art. 50) [5], no conformity assessment required.
  • Article 4 literacy training - mandatory from 2 August 2026 for employees who interact with AI systems [3]. Covers prompting basics, risks, and boundaries.
  • High-risk zones to watch - AI in hiring, evaluation, credit scoring, or access to essential services is high-risk. Using ChatGPT to screen CVs is likely high-risk under Annex III and requires additional controls.
  • Penalties for violations - up to EUR 15 million or 3 percent of global revenue for high-risk non-compliance; SMEs get proportionate caps [4].

The Documentation Stack

For a defensible ChatGPT rollout in 2026 you need five documents: a signed DPA, a balancing test for Art. 6(1)(f), a one-page policy, an Art. 30 records entry, and a training record per employee. Everything else is optional. Do not let consultants sell you a 40-page governance framework before these five are in place.

The 1-Page ChatGPT Policy Template

The best policies fit on one page. Employees actually read them. Works councils actually approve them. Lawyers actually update them. Use this template as a starting point and tune the specifics to your company.

  1. Purpose - This policy defines how employees of [Company] may use ChatGPT and similar AI assistants in day-to-day work.
  2. Sanctioned tools - ChatGPT Enterprise / Microsoft Copilot / Claude for Work. Use of consumer accounts (ChatGPT Free, Plus) for company data is not permitted.
  3. Green-light use cases - internal drafting, summaries of public documents, explanations, first-draft generation (with human review), brainstorming.
  4. Red lines (never paste) - customer data, source code with business logic, non-public financials, personal data of employees, security-sensitive content, regulated legal advice.
  5. Review duty - any AI output used in a decision, sent externally, or affecting a contract must be reviewed by a human competent in that domain.
  6. Disclosure - disclose AI assistance when the output goes to a customer, regulator, or public audience, or when it materially shapes a decision.
  7. Incidents - if data was pasted into a non-sanctioned tool, report to the Datenschutzbeauftragter within 24 hours. Treated as a process failure on the first occurrence.
  8. Training - mandatory 2-hour onboarding plus annual 1-hour refresh. Required under EU AI Act Art. 4 from August 2026.
  9. Review cadence - this policy is reviewed quarterly in 2026, then twice a year from 2027.
  10. Contact - questions go to [Datenschutzbeauftragter] or [AI Lead].

Policy Readiness Checklist

  • One page, plain language, current date
  • Named sanctioned tool with tier
  • Concrete green-light and red-line examples
  • Signed DPA on file with the vendor
  • Balancing test for Art. 6(1)(f) drafted
  • Art. 30 records entry created
  • Works council informed and aligned
  • Training plan with calendar dates
  • Incident reporting path documented
  • Review date on policy (quarterly in 2026)

Works Council and Training

The works council conversation is where most Mittelstand AI rollouts slow down. The fix is not legal firepower - it is early, respectful engagement.

Works council alignment in 30 days

  1. Week 1 - informal conversation - brief the works council chair before drafting anything. Intent, timeline, which tool, who benefits. No legal documents yet.
  2. Week 2 - draft works council agreement - cover scope, monitoring boundaries, employee protections, training, and data handling. Section 87 BetrVG gives them co-determination rights on monitoring tech [14].
  3. Week 3 - joint review - sit with works council, data protection officer, and HR. Adjust. Mittelstand councils are usually constructive when the tool genuinely helps and protections are clear.
  4. Week 4 - sign and announce - joint announcement from management and works council. Single voice, same message. Builds trust.

Training that actually sticks

  • Format - 2-hour hands-on onboarding per employee, not a one-hour webinar. Employees use the tool with real (sanitised) examples during training.
  • Content - prompting basics (30 min), red lines with real cases (30 min), verification habits (30 min), practical exercises (30 min).
  • Role-specific modules - HR, finance, sales, engineering each get a 30-minute add-on with use cases relevant to their work.
  • Record keeping - attendance logged per employee for EU AI Act Art. 4 compliance [3]. A minimal logging sketch follows this list.
  • Refresh cadence - 1-hour refresher annually. EY reports employees with 81+ hours of annual AI training deliver 14 hours per week in productivity gains [13].
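
A minimal sketch of that attendance log, assuming a plain CSV file is acceptable evidence (the file name and columns are illustrative assumptions - an export from your HR system works just as well):

```python
import csv
from datetime import date
from pathlib import Path

LOG_FILE = Path("ai_training_log.csv")  # illustrative location and name
FIELDS = ["employee_id", "name", "module", "duration_hours", "date"]

def log_attendance(employee_id: str, name: str, module: str, duration_hours: float) -> None:
    """Append one attendance record; write the header when the file is new."""
    is_new = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({
            "employee_id": employee_id,
            "name": name,
            "module": module,
            "duration_hours": duration_hours,
            "date": date.today().isoformat(),
        })

log_attendance("E-1042", "Example Employee", "Onboarding: prompting basics and red lines", 2.0)
```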

“AI offers enormous opportunities for companies, regardless of size or industry. The greatest danger is simply ignoring AI and missing the train.”

- Dr. Ralf Wintergerst, President of Bitkom [17]

Need a policy your works council will actually sign?

Book a 30-minute call. We will adapt the template to your company and review it with your DPO.

Book a Demo →

ChatGPT Free vs Team vs Enterprise

Most employees know ChatGPT Free. Few leaders know the difference between Team and Enterprise. Here is what actually matters for a Mittelstand deployment.

Capability | Free | Plus | Team | Enterprise
Training on your data | Yes (opt-out) | Yes (opt-out) | No | No
DPA available | No | No | Yes | Yes
Admin console | No | No | Yes | Yes (advanced)
SSO | No | No | Google SSO | SAML, full SSO
Audit logs | No | No | Limited | Full
EU data residency | No | No | No | Yes
Price (per user per month) | EUR 0 | ~EUR 20 | ~EUR 25 | Custom
Right for Mittelstand? | Personal only | Personal only | Under 200 seats | 200+ seats, compliance

Picking the right tier

  • Under 50 employees - Team is usually enough. SSO via Google, admin basics, no training on data. Lowest friction to start.
  • 50-200 employees - Team still works, but evaluate Microsoft 365 Copilot if you are already Microsoft-heavy. Often cleaner integration and identity than standalone ChatGPT Team.
  • 200+ employees - Enterprise makes sense. Full SAML SSO, EU residency, audit logs, procurement-grade SLA. Budget EUR 40-60 per user per month depending on scope.
  • Regulated industries - healthcare, financial services, defence - Enterprise with EU residency is almost always the right call regardless of size.
  • Hybrid reality - most Mittelstand companies end up running Copilot (daily productivity) plus Claude or ChatGPT Enterprise (heavier tasks) plus one or two custom agents. That hybrid is fine and often the most cost-effective stack.

Rollout in 60 Days

A 60-day rollout gets you from “we have a shadow AI problem” to “we have a sanctioned tool, a policy, and trained employees”. The discipline matters more than the speed.

Days 1-14: Foundations

  1. Map current shadow use - brief, anonymous survey. How many use AI at work? Which tools? For what? The results drive the next 6 weeks.
  2. Select the sanctioned tool - Team for small, Enterprise for large, Copilot if you are Microsoft-heavy. Get the DPA draft from the vendor.
  3. Draft the policy - use the one-page template. Tune to your company. Legal review in parallel.
  4. Inform works council - informal conversation in week 2. Avoid surprises in week 4.

Days 15-30: Legal and governance

  1. Sign DPA - with chosen vendor. Add SCC if needed. Store in the central contracts repository.
  2. Art. 30 entry - add AI tool to records of processing. Ten minutes of work, high compliance value.
  3. Balancing test - document the Art. 6(1)(f) assessment. Template exists in your data protection documentation.
  4. Works council agreement - signed by end of week 4. If it drags, pause the rollout rather than ship half-aligned.

Days 31-45: Pilot

  1. Pilot cohort - 20-30 users across marketing, operations, and engineering. Provision the sanctioned tier. Train them first.
  2. Training delivery - 2-hour hands-on session per cohort. Collect feedback on what confuses people.
  3. Fix the policy edges - the pilot always surfaces gaps. Update the policy in week 5 based on real questions.
  4. Measure adoption - weekly usage, top use cases, incidents. Share within the pilot group.
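
How you pull the numbers depends on the vendor, but any admin console export will do. A minimal sketch, assuming a hypothetical CSV export with user, date, and messages columns (the file and column names are assumptions - check your vendor's actual reporting):

```python
import pandas as pd

# Hypothetical admin-console export - file and column names are assumptions.
usage = pd.read_csv("usage_export.csv", parse_dates=["date"])

weekly = (
    usage.groupby(pd.Grouper(key="date", freq="W"))
    .agg(active_users=("user", "nunique"), messages=("messages", "sum"))
)
weekly["messages_per_user"] = weekly["messages"] / weekly["active_users"]
print(weekly.tail(4))  # the last four weeks, for the pilot review
```

If your vendor offers a reporting API instead of a CSV export, the same aggregation applies to that data.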

Days 46-60: Company-wide launch

  1. Full provisioning - accounts for all eligible employees. Old shadow accounts retired where possible.
  2. Training for the rest - cohort-based, 2 hours each. Record attendance for Art. 4 compliance.
  3. Joint announcement - CEO and works council lead. Single message. Calm tone. Policy attached.
  4. Observe and adjust - daily admin console review for week 1, weekly thereafter. First policy refresh at day 90.

What Not to Do

Do not send a 40-page policy to everyone and call it training. Do not skip the works council to “move faster” - you will pay for it twice. Do not launch Enterprise without SSO configured. Do not forget the incident channel - employees will hit edge cases and need a path to ask.

“About a quarter of our survey respondents report that they have started scaling at least one agentic AI system, but usually only in one or two business functions.”

- Michael Chui, Senior Fellow at McKinsey Global Institute [18]

How Superkind Fits

Superkind helps Mittelstand companies run this rollout in 60 days, then builds the custom agents that go beyond generic ChatGPT use. ChatGPT policy is the starting line, not the finish line.

  • Policy workshop - we adapt the template to your reality, sit with your DPO, and draft the works council agreement in week 2.
  • Tool selection - honest recommendation on Team, Enterprise, Copilot, or a hybrid. No reseller incentives pushing a specific tier.
  • Training delivered - hands-on, role-specific, recorded for Art. 4 compliance. Onboarding and annual refreshers.
  • Works council engagement - we join the session, explain the architecture, answer technical questions. Mittelstand councils respond well to clear technical explanations.
  • Beyond ChatGPT - once generic chat is handled, we build custom agents that solve workflows ChatGPT cannot: SAP automation, DATEV integration, sector-specific processes.
  • Process-first mindset - we map your workflows before touching a vendor. The policy reflects what you actually do, not a template from a different industry.
  • Ongoing partnership - policy refreshes, training refreshes, incident reviews. AI is not a one-time rollout.
  • Compliance built in - DSGVO, EU AI Act Article 4, works council, audit trail. All of it part of delivery.

DIY vs Superkind

DIY

  • Legal research drag - teams spend months on DSGVO interpretation
  • Works council friction - without experience, negotiations stretch
  • Training as webinar - rarely sticks without role-specific depth
  • Stops at policy - you end up with a doc, not a capability

Superkind

  • 60-day rollout - policy, tool, training, works council
  • Pre-built template - adapted in hours, not weeks
  • Hands-on training - role-specific, Art. 4 compliant
  • Bridge to agents - ChatGPT policy is step 1, custom agents step 2

Frequently Asked Questions

Can we just ban ChatGPT at work?

You can try, but it rarely works. Bitkom 2025 found 66 percent of German employees already use AI at work, mostly on private accounts when there is no company provision. A blanket ban pushes usage underground, where you cannot see it, cannot protect data, and cannot train anyone. The better move is a clear policy plus a sanctioned tool so people have a safe path.

Is using ChatGPT a DSGVO violation?

No, but it can become one fast. Free ChatGPT trains on conversations by default, which means any personal data you paste becomes training input. Enterprise tiers (ChatGPT Team, Enterprise) disable training on your data and sign DPAs. For DSGVO compliance you need a sanctioned tier, a DPA, documented purpose, and clear boundaries on what employees may paste.

Do we need a ChatGPT policy or a broader AI policy?

Both, but start with a focused ChatGPT policy that covers Microsoft Copilot, Google Gemini, and Claude as well. These are the tools employees actually use. A broader AI governance framework follows, covering agents, analytics, and custom applications. Do not wait for the perfect comprehensive policy - ship a clear ChatGPT-and-peers policy first.

What is the biggest risk of ChatGPT at work?

Data leakage via copy-paste into consumer accounts. Samsung banned ChatGPT internally after engineers pasted source code into it. IBM 2024 found shadow AI adds USD 670,000 to the average cost of a data breach. The risk is not the tool - it is pasting confidential documents, customer data, and source code into a consumer chat window that trains on everything.

Do we need works council approval to roll out ChatGPT?

In most cases yes. Section 87 BetrVG gives works councils co-determination rights for technical systems that can monitor employees. ChatGPT tooling often qualifies. The practical path is an early informal conversation, then a simple works council agreement covering usage scope, monitoring boundaries, and employee protections. Most Mittelstand works councils are constructive when informed early.

Who needs AI training, and how much?

Any employee who uses AI as part of their job needs training under Article 4 of the EU AI Act from August 2026. For ChatGPT specifically, training covers: prompting basics, what you may and may not paste, how to verify outputs, privacy boundaries, and escalation when unsure. Budget 2-4 hours per employee for initial training and 1 hour annually for refreshers.

ChatGPT Team or Enterprise - which tier do we need?

Team is right for companies under 200 employees that want admin controls, SSO, and no training on data, at about USD 25 per user per month. Enterprise adds SAML SSO, audit logs, unlimited high-speed access, custom data retention, and a stronger SLA - it becomes worth it above 200 seats or when compliance requirements are strict. Both are DSGVO-compatible with a signed DPA.

Where is ChatGPT data processed - is there EU data residency?

ChatGPT Enterprise has offered EU data residency since 2024, with processing in Frankfurt. Team and Plus process primarily in the US, with transfers relying on the EU-US Data Privacy Framework. For high-sensitivity data, Enterprise with EU residency is the cleaner choice. Always check the current data processing addendum before rolling out.

How do we stop shadow AI use?

Technical controls plus policy plus training. Block consumer AI sites at the network layer (or route them through a DLP proxy). Provide a clearly sanctioned tool so there is no excuse. Publish the policy in plain German. Run hands-on training with real examples. The combination cuts shadow usage by 70-80 percent within 90 days. Policy alone, without a sanctioned tool, fails every time.

Can leadership use ChatGPT for strategy and sensitive topics?

Only on a sanctioned enterprise tier with clear boundaries. Strategy data, financials, M&A material, and personnel data should never go into consumer ChatGPT. On Enterprise with a DPA, the risk is much lower, but even there keep sensitive material in internal tools or custom agents where audit trails and access controls are stricter.

What happens if an employee pastes the wrong data or ships a flawed AI output?

Treat it as a process failure, not a disciplinary event, on the first occurrence. Policies should require human review of any AI-assisted contract, financial calculation, or customer-facing document. The fix is stronger review gates and clearer training, not punishment. Repeat violations after training and a clear policy are a different conversation.

How often should we update the ChatGPT policy?

Quarterly in 2026, then twice a year from 2027 onward. Tools and regulation are moving fast. Tie updates to: new EU AI Act guidance, new vendor capabilities (OpenAI, Microsoft, Anthropic), internal incident reviews, and works council feedback. A policy that does not change every quarter in 2026 is either perfect or out of date.

Do employees have to disclose when they used AI?

Internally: not required for routine drafting tasks. Required for any output that goes to a customer, regulator, or public audience - and required if the output drives a real decision. Externally: under Article 50 of the EU AI Act, AI-generated content that could mislead must be labelled. Build a simple rule into your policy: if the output affects someone outside the team, disclose.


Henri Jung

Co-founder of Superkind, where he helps SMEs and enterprises deploy custom AI agents that actually fit how their teams work. Henri is passionate about closing the gap between what AI can do and the value it creates in real companies. He believes the Mittelstand has everything it needs to lead in AI - it just needs the right approach.

Ready to roll out ChatGPT safely?

Book a 30-minute call with Henri. We will adapt the policy template to your company and plan your 60-day rollout - no commitment, no sales pitch.

Book a Demo →