Artificial intelligence (AI) is transforming work processes, data flows, responsibilities and liability risks within companies. This article explains what employers should consider when using AI in their business in general, and when drafting employment contracts in particular – including from a (tax) compliance perspective.
AI in the employment relationship as an efficiency driver
AI is becoming increasingly important in everyday working life and therefore has a direct or indirect impact on tax and compliance matters, with consequences for how employment relationships need to be structured from a legal perspective.
This is evident, for example, where companies provide their employees with AI-based software to carry out their tasks. It is equally relevant where employees independently use freely available AI tools for a wide range of purposes within the company.
Depending on the company’s line of business, this may affect work results or products (e.g. manufacturing or digital workflows). In many cases, however, AI is also used extensively for internal processes, such as bookkeeping, tax calculations or the preparation of reports for internal purposes (e.g. reporting obligations to the management board or supervisory board) or external use (e.g. tax advisers or tax authorities).
This is understandable. AI enables the automation of routine tasks, improved data analysis and anomaly detection, and generally leads to more efficient workflows across almost all areas of a business – from tax and controlling to product development, production and marketing.
Uncontrolled use of AI as a legal risk
Where there are opportunities, there are also risks. The use of AI by employees creates new legal risks, particularly in the area of (tax) compliance. These risks must be addressed as effectively as possible in order to realise efficiency gains while keeping exposure to a minimum.
As powerful as AI can be, incorrect AI-generated analyses may lead to defective products or work results, which in turn can have negative external effects on customers or business partners. Internally, unchecked or incorrect AI outputs can also cause significant problems. If flawed AI-generated results are used as a basis for human decisions or further automated processes, this can result in serious legal and financial damage.
This is especially true in the context of collecting and processing data for tax purposes. Incorrect, unchecked AI outputs may lead to inaccurate tax bases and, as a result, to tax back payments, penalties and interest. For example, AI may suggest incorrect accounting entries or apply inappropriate depreciation methods. In addition, AI-related errors and their consequences can expose members of the management board or supervisory board to personal liability risks, particularly in light of the business judgement rule.
Another key issue is the protection of (tax-relevant) company data, especially when sensitive information is entered into external AI systems. This includes, in particular, personal data such as payroll information, as well as calculations and financial planning data.
Note: AI cannot simply be introduced and used in a company without further consideration. The general legal framework is provided by the EU AI Act, which applies directly in Germany and, for example, completely prohibits certain forms of social scoring or data scraping – matters that are also relevant in the HR context. In addition, German works constitution law (BetrVG) applies. A works council must be informed at an early stage about plans to introduce AI systems if these affect work processes or workflows (§ 90 (1) no. 3 BetrVG). The works council also has co-determination rights under § 87 (1) no. 6 BetrVG if AI systems are used, or are suitable, to monitor employee behaviour or performance.
Liability for AI-generated content or unlawful use of AI
What specific liability risks are involved?
First, there is the potential external liability of companies. As a general rule, this liability is currently governed by the German Civil Code (BGB), in particular §§ 823 et seq. There is, at present, no specific AI liability regime under either German or European law.
What about employee liability towards the employer or third parties? Here, the established principles of internal loss compensation (innerbetrieblicher Schadensausgleich) apply to activities carried out in the course of employment: an employee's liability for damage caused by incorrect or impermissible use of AI systems – whether within the company or in relation to third parties – depends on the degree of fault, with the burden of proof resting on the employer.
Note: In cases of slight negligence, employees bear no liability at all; in cases of medium negligence, liability is apportioned between employee and employer; only where employees have acted with gross negligence or intent are they, as a rule, personally liable in full.
Any contractual arrangements in employment contracts that deviate from this liability regime are not permitted. In particular, strict liability or guarantee liability independent of fault cannot be imposed on employees by way of employment contract provisions – including in the context of AI.
How to minimise risks: sensible drafting of employment contracts
To reduce these risks, companies should focus not only on the measured and targeted use of AI, but also on providing employees with binding rules that set out clear and practical requirements. Such rules should also help to minimise liability risks in all directions, enabling compliant operations with the lowest possible financial exposure.
The appropriate place for such rules is the employment contract.
Important: New employment contracts should include appropriate AI-related provisions from the outset. Existing contracts should be supplemented wherever possible.
This applies both to AI applications officially introduced and provided by the employer and to freely available tools that employees could theoretically use at any time, for example via the internet.
The key is to strike the right balance. On the one hand, there should be general rules governing the use of AI tools as such. On the other hand, more specific provisions may be required for particular areas such as tax, controlling, product development, marketing or production, or for specific legal risks such as copyright law or data protection. Depending on the area of use, different AI applications will require contractual provisions tailored to their specific risk profile.
In general, the following types of provisions may be considered in employment contracts:
- prohibition of, or consent to, the use of AI in general or of specific AI systems only,
- rules on the use and analysis of personal data through AI in the context of the employment relationship,
- prohibition of entering certain (internal) data into external AI applications,
- clarification of copyright and usage rights arising from the use of AI by employees,
- obligation to critically review AI-generated results, with reference to the internal loss compensation regime,
- clarification of management responsibility,
- labelling of AI-generated work results, and
- clauses requiring training on the proper use of AI systems.
Note: As an alternative or in addition, internal AI usage policies may be adopted. In companies with a works council, works agreements may also be appropriate, either for specific applications or on a general basis.
Managing risks and leveraging AI-driven efficiency
The use of AI in companies increases efficiency, but also creates risks that companies can – and should – actively address.
On the one hand, this involves selecting appropriate AI tools for specific specialist areas, ideally in consultation with external advisers, for example in tax or compliance, to ensure that the use of such tools is both meaningful and legally sound. On the other hand, it requires clear and tailored provisions in employment contracts that provide employees with reliable guidance for their respective areas (tax, controlling, product development, marketing, production, etc.) while at the same time minimising liability risks for the company and its decision-makers.
In this context in particular, AI-related provisions in employment contracts should not be viewed in isolation. Rather, they should be understood as part of a holistic legal and tax compliance strategy, with employment contracts consciously used as a key instrument to manage and mitigate risks arising from the use of AI within the company.
Answers to frequently asked questions (FAQ):
Can employees simply use AI tools within a company?
No. The use of AI within a company is legally regulated and cannot take place without proper safeguards. Employers must comply with the EU AI Act, data protection law and, where applicable, employee co-determination rights (e.g. under German works constitution law). Unregulated use, particularly of external AI tools by employees, can give rise to significant compliance and liability risks.
How can companies effectively minimise risks when using AI?
The key is clear regulation of AI use. This should ideally be set out in employment contracts and include rules on permitted use, handling of sensitive data, obligations to review AI-generated results and allocation of responsibilities. Internal policies or works agreements can provide useful additional support. The aim is efficient but legally compliant AI use as part of a comprehensive compliance strategy.
What specific risks arise from AI in the tax and compliance context?
In tax and compliance matters, incorrect or unchecked AI outputs can have serious consequences. Errors in data processing or tax calculations may lead to back taxes, penalties and interest. In addition, flawed AI-supported decision-making may expose members of management or supervisory bodies to personal liability risks, for example under the business judgement rule.
Are internal AI policies sufficient, or are contractual provisions necessary?
Internal policies or works agreements can usefully complement AI governance, but they do not fully replace employment contract provisions, especially where there is no works council. Employment contracts are the central instrument for establishing binding obligations, usage limits and liability rules in a legally secure manner. This is particularly important in sensitive areas or when using external AI tools.
Do you have questions about this topic or about employment law in general?
If you have any questions or require assistance, please do not hesitate to contact me – I will be happy to help.
Yours, Christian Seidel
Your ACCONSIS contact

Christian Seidel
Lawyer
Specialist in labour law
Authorised signatory of ACCONSIS
Service phone
+49 89 547143
or via email
c.seidel@acconsis.de

