Compliance Statement

EU AI Act compliance — our system, laid out in full.

This statement describes the compliance posture and operating rules of Curious Endeavor under Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (the "AI Act")[1], together with related obligations under Regulation (EU) 2016/679 (the "GDPR")[2].

It is published for the benefit of clients, partners, suppliers, and competent authorities. It is a factual disclosure of how Curious Endeavor classifies its own activity under the AI Act, which obligations apply, and how those obligations are operationalised inside the studio. It is not legal advice and does not create rights or obligations beyond those established by the Regulation itself and by Curious Endeavor's contracts.

Everything in this statement is specific, traceable, and — where the Regulation so requires — demonstrable on request.

Version 1.0
Effective 2026-04-13
Next review 2027-04-13
Policy owner Assaf Dagan

Contents

  1. Regulatory framework
  2. What Curious Endeavor does
  3. Our role under the Act: deployer
  4. Risk classification
  5. Article 5 — prohibited practices
  6. Article 6 & Annex III — high-risk exclusion
  7. AI systems used
  8. Article 4 — AI literacy programme
  9. Article 50 — transparency measures
  10. Human oversight
  11. Data governance & GDPR interface
  12. Client obligations
  13. Incident handling & complaints
  14. Governance, records & review
  15. Application timeline
  16. Competent authorities

§ 01 — Regulatory framework

The law this statement is written against.

Curious Endeavor operates in the European Union and serves clients primarily within the European Economic Area. Its use of artificial intelligence is therefore governed, in order of specificity, by:

  • The AI Act — Regulation (EU) 2024/1689 of 13 June 2024 laying down harmonised rules on artificial intelligence[1]. The Regulation entered into force on 1 August 2024 and applies in stages through 2 August 2027 (see § 15 below).
  • The GDPR — Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data[2]. The GDPR applies in full to any processing of personal data through or alongside Curious Endeavor's AI workflows.
  • Austrian implementing law and supervisory practice, including the activity of the national AI Act service office at Rundfunk und Telekom Regulierungs-GmbH (RTR) and of the Austrian Data Protection Authority (Datenschutzbehörde).
  • Contractual terms of the AI providers whose systems Curious Endeavor deploys (see § 7), which flow through to Curious Endeavor as a business customer and, via the Master Services Agreement, to Curious Endeavor's clients.

Where this statement refers to an "Article", it refers to an Article of the AI Act unless otherwise indicated. Citations are collected in the references block at the end of this page.

§ 02 — What Curious Endeavor does

The factual activity being classified.

Curious Endeavor is a brand strategy and creative studio. Its commercial offering is structured in two service lines, Make and Run:

  • Make. One-time engagements covering brand research, category mapping, positioning, message architecture, naming, visual identity, pitch and sales-page production, and the tailoring of Curious Endeavor's internal tooling to the client's brand.
  • Run. Monthly engagements, hard-capped at €5,000 per month, operating the tooling configured during Make — market-intelligence gathering, content production, outreach assistance, and creative operations.

In the course of both Make and Run, Curious Endeavor uses general-purpose AI systems supplied by third parties to assist human operators with research synthesis, drafting, variant generation, and image generation. Curious Endeavor also operates a limited-scope, password-gated conversational interface ("CE Project Assistant") on selected project pages, used by named clients to query the context of their own project.

Curious Endeavor does not develop, train, fine-tune, or place on the market any foundation model or general-purpose AI model of its own. It does not supply AI systems to clients as standalone products. Its deliverables are human-authored or human-edited brand, content, and creative work informed and accelerated by AI tools operated by Curious Endeavor personnel.

§ 03 — Our role under the Act: deployer

Curious Endeavor is a deployer, not a provider.

The AI Act distinguishes between several operator roles, each with its own set of obligations. The two most relevant to Curious Endeavor are provider and deployer, defined in Article 3 of the Regulation:

"Provider" means a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the system into service under its own name or trademark, whether for payment or free of charge. AI Act, Article 3(3)[1]
"Deployer" means a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity. AI Act, Article 3(4)[1]

Curious Endeavor uses third-party AI systems under its authority in the course of a professional activity. It does not place AI systems on the market or put them into service under its own name or trademark. Curious Endeavor is therefore a deployer within the meaning of Article 3(4), and is not a provider, importer, or distributor within the meaning of Article 3(3), 3(6), or 3(7).

This classification is reassessed whenever Curious Endeavor's offering materially changes, and at each annual review of this statement.

§ 04 — Risk classification

Curious Endeavor does not deploy high-risk or prohibited AI systems.

The AI Act uses a risk-based architecture. The categories relevant to Curious Endeavor's classification are:

  • Prohibited practices — banned outright under Article 5.
  • High-risk systems — subject to the full Chapter III obligations, determined by reference to Article 6 and the list in Annex III.
  • Limited-risk systems — subject to the transparency obligations in Article 50.
  • Minimal-risk systems — no mandatory obligations beyond general principles, voluntary codes of conduct (Article 95), and any horizontal law that applies (GDPR, consumer protection, etc.).

Curious Endeavor's assessment, documented in this statement and reviewed annually, is that its current activity falls into the minimal-risk category, with specific touchpoints in the limited-risk category which trigger the Article 50 transparency obligations set out in § 9 below.

A summary of the classification:

  • Legal instrument — Regulation (EU) 2024/1689 (OJ L, 2024/1689, 12.7.2024 · CELEX 32024R1689).
  • Operator role — Deployer (AI Act, Art. 3(4)).
  • Prohibited practices — None engaged in (AI Act, Art. 5(1)(a)–(h)).
  • High-risk use cases — None; all eight Annex III headings excluded (AI Act, Art. 6(2) & Annex III).
  • Transparency obligations — Applied to the chat interface, synthetic content, and public-interest text (AI Act, Art. 50(1), (2), (4)).
  • GPAI provider obligations — Not applicable; no model of Curious Endeavor's own is placed on the market (AI Act, Art. 53).

§ 05 — Article 5: prohibited practices

Explicit exclusion of every practice prohibited by Article 5.

Article 5 of the AI Act prohibits the placing on the market, putting into service, or use of AI systems for the practices listed in its paragraph 1. Curious Endeavor neither engages in, nor assists clients in engaging in, any of the following practices, and has instructed its personnel and contractors accordingly:

  • Subliminal, manipulative or deceptive techniques deployed with the object or effect of materially distorting a person's behaviour in a manner that causes or is reasonably likely to cause significant harm. Art. 5(1)(a).
  • Exploitation of vulnerabilities of a natural person or a group on the basis of age, disability, or a specific social or economic situation, where such exploitation materially distorts behaviour in a harmful manner. Art. 5(1)(b).
  • Social scoring — the evaluation or classification of natural persons or groups over a period of time on the basis of social behaviour or personal characteristics leading to detrimental or unfavourable treatment. Art. 5(1)(c).
  • Predictive policing — risk assessment of natural persons to predict the likelihood of their committing a criminal offence based solely on profiling or personality traits. Art. 5(1)(d).
  • Untargeted scraping of facial images from the internet or closed-circuit television footage for the creation or expansion of facial-recognition databases. Art. 5(1)(e).
  • Emotion recognition in the workplace or in educational institutions, save for strictly medical or safety reasons expressly permitted by the Regulation. Art. 5(1)(f).
  • Biometric categorisation of natural persons to deduce or infer race, political opinions, trade-union membership, religious or philosophical beliefs, sex life, or sexual orientation. Art. 5(1)(g).
  • Real-time remote biometric identification in publicly accessible spaces for the purposes of law enforcement, save within the narrow exceptions set out in the Regulation. Art. 5(1)(h).

This exclusion is unconditional and non-negotiable. A client request that would require any of the above is treated as out of scope; the engagement is restructured or declined. This rule is documented in Curious Endeavor's internal AI policy, flows into the standard Master Services Agreement, and is tested at the intake stage of every new engagement (see § 12).

§ 06 — Article 6 & Annex III: high-risk exclusion

No Annex III use case is in scope.

Under Article 6(2), an AI system is classified as high-risk when it is intended to be used for one of the purposes listed in Annex III. Curious Endeavor's engagements do not cover, and are contractually excluded from, any of the eight Annex III headings:

  • Biometrics, to the extent not already prohibited by Article 5 — including remote biometric identification, biometric categorisation, and emotion recognition. Annex III, point 1.
  • Critical infrastructure — safety components in the management and operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating or electricity. Annex III, point 2.
  • Education and vocational training — determining access, admission or assignment; evaluating learning outcomes; assessing the appropriate level of education; monitoring and detecting prohibited behaviour of students during tests. Annex III, point 3.
  • Employment, workers management and access to self-employment — recruitment and selection; decisions affecting terms, promotion or termination; task allocation based on individual behaviour or personal traits; monitoring and evaluation of performance and behaviour. Annex III, point 4.
  • Access to and enjoyment of essential private services and essential public services and benefits — eligibility determination; creditworthiness evaluation and credit scoring; pricing of life and health insurance; emergency service dispatch and triage. Annex III, point 5.
  • Law enforcement — any use for law-enforcement purposes covered by the heading. Annex III, point 6.
  • Migration, asylum and border control management — any use under the heading. Annex III, point 7.
  • Administration of justice and democratic processes — assisting judicial authorities or influencing the outcome of elections or voting behaviour. Annex III, point 8.

Curious Endeavor's work is confined to commercial brand, content, creative, and market-intelligence tasks for private-sector clients. These are minimal-risk activities under the Regulation. If a client proposes to repurpose a Curious Endeavor deliverable for an Annex III use case, the repurposing is outside the scope of Curious Endeavor's services, does not constitute an intended purpose within the meaning of Article 3(12) of the Regulation, and — under § 12 below — triggers the client's own deployer obligations.

§ 07 — AI systems used

Approved providers, approved tiers, approved terms.

Curious Endeavor uses only general-purpose AI systems supplied by established providers, on commercial tiers whose contractual terms exclude the use of customer inputs and outputs for model training by default. At the effective date of this statement the approved systems are:

  • Anthropic Claude, accessed through Anthropic's commercial API and associated first-party tooling, under Anthropic's Commercial Terms of Service.
  • OpenAI GPT family, accessed through the OpenAI API under OpenAI's Business Terms, with API data opt-out from training applied.
  • Image-generation models accessed through the above providers' multimodal endpoints, under the same terms.

Consumer-tier accounts — whose terms may permit training on user content — are not used for client work. New providers are added only after review of: (i) data-processing terms and sub-processor chain; (ii) EU availability and data-residency options; (iii) status under the AI Act (including GPAI classification where applicable, per Chapter V); and (iv) alignment with this statement. Provider changes are recorded in the internal policy register.

Under the AI Act, the providers of these general-purpose AI models bear the obligations set out in Article 53 (technical documentation, training-data summaries, copyright compliance, cooperation with the AI Office) from 2 August 2025. Curious Endeavor, as a downstream deployer, relies on the providers' compliance documentation and cooperates with reasonable information requests along the AI value chain.

§ 08 — Article 4: AI literacy programme

Every operator is trained before touching client-facing AI workflows.

Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used. AI Act, Article 4 — in force since 2 February 2025[1]

Curious Endeavor implements Article 4 through the following standing programme, which applies to all employees, founders, contractors, and any client staff trained to operate Curious Endeavor-installed tooling:

Baseline requirements

  • Every operator reads this compliance statement and the internal AI Use Policy in full before their first independent client-facing task.
  • Every operator completes a 30-minute onboarding briefing with a senior operator covering: the capabilities and failure modes of the approved models (including hallucination, context-window effects, prompt injection, and data-leakage vectors); which categories of data may be processed in which systems; when Article 50 disclosure applies; and the escalation path under § 13.
  • Operators working on a client engagement are additionally briefed on the client's specific confidentiality, brand, and data-sensitivity constraints before they are granted access to the relevant tooling.

Ongoing requirements

  • A refresher is completed annually, or within thirty days of: (i) a material change to this statement; (ii) a material change in the approved systems list; (iii) a material change to the Regulation or relevant guidance; or (iv) a recorded incident under § 13.
  • Each operator keeps an up-to-date understanding of the documentation, usage policies, and known limitations of the specific systems they operate.

Evidence

  • Training completion, refresher dates, and acknowledgement of this statement are recorded in the internal policy register maintained by the policy owner under § 14.
  • Records are retained for five years and are available on request to competent authorities.
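The register entries described above can be illustrated with a minimal Python sketch. This is not a prescribed format; the class and field names are hypothetical, chosen to mirror the fields the programme requires (operator, acknowledged version, onboarding date, refresher dates) and the annual refresher cadence:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class TrainingRecord:
    """One operator's Article 4 literacy-programme entry (illustrative fields)."""
    operator: str
    statement_version: str   # version of this statement acknowledged
    onboarding_date: date
    last_refresher: date

    def refresher_due(self) -> date:
        # Annual refresher cadence per the programme above.
        return self.last_refresher + timedelta(days=365)

    def is_current(self, today: date) -> bool:
        return today <= self.refresher_due()

record = TrainingRecord("J. Doe", "1.0", date(2026, 4, 13), date(2026, 4, 13))
print(record.refresher_due())                 # 2027-04-13
print(record.is_current(date(2026, 12, 1)))   # True
```

Note that the programme also triggers refreshers within thirty days of material changes or incidents; a real register would track those triggers alongside the annual date.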

§ 09 — Article 50: transparency measures

How we disclose AI where the Regulation requires it.

Article 50 imposes specific transparency obligations on providers and deployers of certain AI systems. Three of its provisions are relevant to Curious Endeavor's activity. Each is addressed below.

9.1 — Conversational interfaces (Art. 50(1))

Where Curious Endeavor operates an AI system intended to interact directly with natural persons — in particular the "CE Project Assistant" chat widget served on selected project pages — affected natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances and context of use. The disclosure is presented at the start of the interaction and again in the interface's descriptive copy. The interface is password-gated to named client users; it is not a public chatbot.

9.2 — Synthetic content and deep fakes (Art. 50(4))

Curious Endeavor does not generate or manipulate image, audio, or video content that depicts existing natural persons, objects, places, entities or events in a way that would falsely appear authentic or truthful — i.e., it does not produce "deep fakes" within the meaning of Article 3(60). Where generative tools are used to produce illustrative, conceptual, or decorative imagery, Curious Endeavor preserves any provenance metadata embedded by the provider (including C2PA signals where available) and does not remove watermarks or provenance markers from AI-generated outputs.

9.3 — AI-generated text on public-interest matters (Art. 50(4), second sub-paragraph)

Article 50(4) requires deployers of AI systems that generate or manipulate text published with the purpose of informing the public on matters of public interest to disclose that the text has been artificially generated or manipulated. This obligation does not apply where the content has undergone a process of human review or editorial control and a natural or legal person holds editorial responsibility for the publication.

Curious Endeavor's deliverables are produced under continuous human review by named operators and are released only under the editorial responsibility of Curious Endeavor or the commissioning client. The human-in-the-loop exemption in the second sub-paragraph of Article 50(4) therefore applies. Notwithstanding that exemption, Curious Endeavor's outreach templates carry a standing AI-assistance disclosure where the recipient is a natural person and disclosure is not otherwise obvious from context.

9.4 — Format of disclosures

Where Article 50 disclosure applies, it is provided to the affected natural person in a clear and distinguishable manner at the latest at the time of the first interaction or exposure, or at the time the content is first made available. Disclosures are accessible to persons with disabilities to the extent proportionate to Curious Endeavor's scale and context of use.

§ 10 — Human oversight

A named human is in the loop at every decision point.

Curious Endeavor's operating model is built around human oversight. The following rules apply to every engagement and every deliverable, irrespective of whether the engagement involves a system that the Regulation itself classifies as requiring human oversight:

  • No AI-generated draft is delivered to a client, published externally, or transmitted to a third party without review by a named Curious Endeavor operator.
  • Operators are instructed to verify factual claims, names, figures, quotations, and third-party references before any external release. "The model said so" is never treated as sufficient evidence.
  • Operators retain the authority and the means to override, correct, or discard any AI output at any point in the workflow. Oversight is not a gate at the end of the process — it is exercised throughout.
  • Where a workflow produces a high volume of variants (e.g., copy iterations, image variants), operators are trained to identify systematic failure modes and to halt or re-prompt rather than accept averaged output.
  • Every deliverable is attributable to a named operator who bears editorial responsibility for it.

These rules are reflected in the internal AI Use Policy and form part of the onboarding briefing under § 8.

§ 11 — Data governance & GDPR interface

Personal data and confidential client material are handled under strict rules.

The AI Act does not displace the GDPR. Where Curious Endeavor's AI workflows process personal data within the meaning of Article 4(1) GDPR, that processing is governed by the GDPR in full. Curious Endeavor operates the following standing controls:

Lawful basis and role

  • For processing on behalf of a client (e.g., ingestion of a client's own customer list or internal content into an AI workflow), Curious Endeavor acts as a processor within the meaning of Article 4(8) GDPR and enters into a data processing agreement under Article 28 GDPR.
  • For processing determined by Curious Endeavor (e.g., operating its own outreach engine or marketing), Curious Endeavor acts as a controller and identifies an Article 6 GDPR lawful basis for each processing activity.
  • Special-category data within the meaning of Article 9 GDPR is not processed through AI workflows unless expressly agreed in writing with the client and supported by an Article 9(2) condition.

Confidentiality and training exclusion

  • Confidential client material is processed exclusively through commercial-tier AI systems whose terms exclude training use by default (see § 7).
  • Confidential client material is never pasted into consumer-tier chat interfaces or into any system that has not been vetted under § 7.

Data minimisation, retention and security

  • Only the minimum personal data necessary for the task is supplied to AI systems.
  • Retention periods are aligned with the purpose of the processing and the client contract, and are reviewed at engagement close-out.
  • Technical and organisational measures are applied in accordance with Article 32 GDPR, including access controls, encrypted transport to AI provider APIs, and segregation of client workspaces.

Transparency and data subject rights

  • Where Curious Endeavor is a controller, its privacy notice discloses the use of AI systems in accordance with Articles 13 and 14 GDPR.
  • Data subject requests under Articles 15–22 GDPR are routed to the policy owner and answered within the statutory deadlines.

Automated decision-making

Curious Endeavor does not make decisions producing legal or similarly significant effects on natural persons based solely on automated processing within the meaning of Article 22 GDPR. No Curious Endeavor deliverable functions as such a decision system.

§ 12 — Client obligations

What our clients warrant when they engage us.

Curious Endeavor's Master Services Agreement contains an AI & Regulatory Risk Allocation clause that flows the following obligations to the client. The clause is included in every engagement by default and is reproduced here in substance for transparency:

  • Non-high-risk warranty. The client warrants that it will not use Curious Endeavor's deliverables, or any AI system, prompt, dataset, or tooling delivered or operated by Curious Endeavor, for any purpose classified as high-risk under Annex III of the AI Act or as a prohibited practice under Article 5. Any such use requires a prior written variation of scope.
  • Deployer flow-through. To the extent the client's own use of a deliverable causes the client to become a deployer under the Regulation, the client assumes the corresponding deployer obligations (including, where applicable, fundamental rights impact assessment under Article 27, human oversight arrangements under Article 14 as referenced through Article 26, input data governance, record-keeping, and end-user transparency).
  • Provider-terms pass-through. The client agrees to comply with the acceptable-use policies of the underlying AI providers to the extent those policies are relevant to the deliverables.
  • Transparency preservation. The client will not remove, obscure, or falsify AI-generated content labels, watermarks, or provenance metadata embedded in any deliverable, and will maintain equivalent disclosures when publishing, distributing, or deploying the deliverable.
  • Data warranty. The client warrants that it has the rights and a lawful basis under the GDPR to provide any material supplied to Curious Endeavor for processing through AI systems, and that such material does not contain special-category personal data unless expressly agreed.
  • Verification responsibility. AI-assisted deliverables may contain errors, omissions, or fabricated references notwithstanding Curious Endeavor's review. The client is responsible for final verification of factual claims, legal statements, financial figures, and third-party references before any external publication or reliance.
  • Indemnity. The client indemnifies Curious Endeavor against third-party claims, regulatory actions, or penalties arising from the client's breach of the foregoing warranties or from the client's use of a deliverable in a high-risk or prohibited context without prior written agreement.

The operative contract controls in the event of any difference between this summary and the executed MSA.

§ 13 — Incident handling & complaints

How to raise a concern and how we respond.

Internal reporting

Operators must report to the policy owner within twenty-four hours any of the following: (i) confidential client material entered into a non-approved system; (ii) AI output released without a required Article 50 disclosure; (iii) a client request that might fall within Article 5 or Annex III; (iv) any output that causes client-facing harm (including defamation, IP infringement, or privacy breach); and (v) any inquiry from a regulator or client about Curious Endeavor's AI use.

External complaints and enquiries

Affected natural persons, clients, partners, and suppliers may raise a concern about Curious Endeavor's AI practices, request further information about this statement, or exercise GDPR data-subject rights by writing to hello@curiousendeavor.com with the subject line "AI Act — [topic]". Complaints receive an acknowledgement within five working days and a substantive response within thirty days, save where the complexity of the matter requires a reasoned extension.
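The two response deadlines above (acknowledgement within five working days, substantive response within thirty days) can be computed mechanically. The sketch below is illustrative only: it counts working days as Monday to Friday and does not model Austrian public holidays, which a real intake process would need to account for:

```python
from datetime import date, timedelta

def acknowledgement_due(received: date, working_days: int = 5) -> date:
    """Add working days (Mon-Fri), skipping weekends; holidays not modelled."""
    current = received
    remaining = working_days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # 0-4 = Monday..Friday
            remaining -= 1
    return current

def substantive_due(received: date) -> date:
    # Thirty calendar days, absent a reasoned extension.
    return received + timedelta(days=30)

# A complaint received on Friday 2026-04-17:
print(acknowledgement_due(date(2026, 4, 17)))  # 2026-04-24 (the following Friday)
print(substantive_due(date(2026, 4, 17)))      # 2026-05-17
```

The weekend-skipping loop is deliberately simple; swapping in a holiday calendar would only require one extra condition in the `if` test.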

Escalation to authorities

Nothing in this statement or in Curious Endeavor's contracts limits the right of any natural person to lodge a complaint with a competent supervisory authority — in Austria, the Austrian Data Protection Authority (Datenschutzbehörde) for GDPR matters, and the national AI Act service office at RTR for AI Act matters (see § 16).

Record

All incidents and complaints are logged with date, scope, system, operator, and remediation, and are reviewed at the annual review under § 14.
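A log entry carrying the fields named above (date, scope, system, operator, remediation, plus the reportable-event category) could be sketched as follows. The class and the sample values are hypothetical, shown only to make the record structure concrete:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class IncidentRecord:
    """One entry in the incident and complaint log (illustrative field names)."""
    logged: date
    scope: str        # affected engagement or workflow (hypothetical value below)
    system: str       # which approved AI system was involved
    operator: str     # named operator responsible
    category: str     # one of the reportable events (i)-(v) above
    remediation: str  # action taken and follow-up

entry = IncidentRecord(
    logged=date(2026, 6, 1),
    scope="Run engagement (illustrative)",
    system="Anthropic Claude (commercial API)",
    operator="J. Doe",
    category="(ii) output released without a required Article 50 disclosure",
    remediation="Disclosure added; operator re-briefed per the § 8 programme",
)
print(entry.operator)  # J. Doe
```

Freezing the dataclass reflects the evidentiary intent: once logged, an entry is not silently edited; corrections would be appended as new entries.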

§ 14 — Governance, records & review

Ownership, cadence, and the records kept to evidence compliance.

Ownership

This statement and the underlying internal AI Use Policy are owned by Assaf Dagan, who acts as policy owner and single point of contact for AI Act compliance questions. The policy owner is responsible for approving new providers, approving material changes to this statement, keeping the internal policy register up to date, and running the annual review.

Review cadence

This statement is reviewed at least annually, and additionally within thirty days of any of the following: (i) a material amendment to the AI Act or adoption of binding guidance by the AI Office or a competent authority; (ii) a material change to the approved systems list under § 7; (iii) a material change to Curious Endeavor's service offering; or (iv) a recorded incident under § 13 that reveals a gap in this statement.

Records

  • A copy of each version of this statement.
  • A copy of the internal AI Use Policy and of each version thereof.
  • The training log for the Article 4 literacy programme, including operator name, date, version acknowledged, and refresher dates.
  • The incident and complaint log.
  • Snapshots of the commercial terms of the approved AI providers at the time of use.
  • A register of approved providers and material changes thereto.

Records are retained for five years and are made available to competent authorities on reasoned request.

§ 15 — Application timeline

Staggered application of the AI Act, per Article 113.

The AI Act entered into force on the twentieth day following its publication in the Official Journal and applies in stages under Article 113. The dates set out below mark when each block of the Regulation's obligations becomes applicable.

  • 1 Aug 2024 — The Regulation enters into force (AI Act, Art. 113).
  • 2 Feb 2025 — Chapter I (general provisions, including the Article 4 AI literacy obligation) and Chapter II (Article 5 prohibited practices) apply (AI Act, Art. 113(a)).
  • 2 Aug 2025 — Chapter V (general-purpose AI models, including Article 53), the governance provisions, and the penalty framework under Article 99 apply (AI Act, Art. 113(b)).
  • 2 Aug 2026 — The remainder of the Regulation applies, including the Article 50 transparency obligations in full and the Chapter III obligations for high-risk systems listed in Annex III (AI Act, Art. 113, general application date).
  • 2 Aug 2027 — The obligations for high-risk AI systems covered by Article 6(1) — systems which are safety components of, or are themselves, products subject to the Union harmonisation legislation listed in Annex I — apply (AI Act, Art. 113(c)).

Curious Endeavor's obligations as a deployer are already operational under this statement irrespective of the above staggering. The dates serve as reference points, not as grace periods for obligations that are already voluntarily in place.

§ 16 — Competent authorities

Where to escalate, and where we cooperate.

Curious Endeavor cooperates in good faith with any competent authority exercising a function under the AI Act or the GDPR. The authorities of primary relevance to Curious Endeavor's operations are:

  • European AI Office, within the European Commission — responsible for supervising general-purpose AI models and supporting consistent application of the AI Act.
  • European Artificial Intelligence Board, established under Article 65 — composed of Member-State representatives and advising the Commission and Member States on consistent application of the Regulation.
  • Austrian national AI Act service office at Rundfunk und Telekom Regulierungs-GmbH (RTR) — the first national contact point for questions on AI Act application in Austria.
  • Austrian Data Protection Authority (Datenschutzbehörde, DSB) — competent supervisory authority for GDPR matters in Austria.

Natural persons retain the right to lodge a complaint with the competent supervisory authority at any time, independently of any internal handling under § 13.

References

  1. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act). OJ L, 2024/1689, 12.7.2024. CELEX: 32024R1689. eur-lex.europa.eu/eli/reg/2024/1689/oj
  2. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). OJ L 119, 4.5.2016, p. 1–88. CELEX: 32016R0679. eur-lex.europa.eu/eli/reg/2016/679/oj
  3. AI Act, Article 3 — Definitions (including "provider", "deployer", "AI system", "general-purpose AI model", "deep fake", "intended purpose"). See Reference [1].
  4. AI Act, Article 4 — AI literacy. See Reference [1]. In force since 2 February 2025 under Article 113(a).
  5. AI Act, Article 5 — Prohibited artificial intelligence practices. See Reference [1]. In force since 2 February 2025 under Article 113(a).
  6. AI Act, Article 6 and Annex III — Classification rules for high-risk AI systems and the list of high-risk use cases under Annex III. See Reference [1].
  7. AI Act, Article 26 — Obligations of deployers of high-risk AI systems (referenced here for completeness, not applicable to Curious Endeavor on the present classification). See Reference [1].
  8. AI Act, Article 27 — Fundamental rights impact assessment for deployers of certain high-risk AI systems (not applicable to Curious Endeavor on the present classification). See Reference [1].
  9. AI Act, Article 50 — Transparency obligations for providers and deployers of certain AI systems, including conversational interfaces (Art. 50(1)), emotion-recognition and biometric-categorisation systems (Art. 50(3)), and synthetic or manipulated content and "deep fakes" (Art. 50(4)). See Reference [1]. Applies from 2 August 2026.
  10. AI Act, Article 53 — Obligations for providers of general-purpose AI models. See Reference [1]. Applies from 2 August 2025 under Article 113(b).
  11. AI Act, Article 65 — European Artificial Intelligence Board. See Reference [1].
  12. AI Act, Article 95 — Codes of conduct for voluntary application of specific requirements. See Reference [1].
  13. AI Act, Article 99 — Penalties framework: administrative fines of up to €35,000,000 or 7% of worldwide annual turnover for infringements of Article 5; up to €15,000,000 or 3% for infringements of other obligations; up to €7,500,000 or 1% for the supply of incorrect, incomplete or misleading information to notified bodies or competent authorities. See Reference [1].
  14. AI Act, Article 113 — Entry into force and application. See Reference [1]. The staggered application dates set out in § 15 of this statement are drawn from this provision.
  15. GDPR, Articles 4, 5, 6, 9, 13, 14, 15–22, 28, 32, 35 — referenced in § 11 of this statement for the processing of personal data in AI workflows. See Reference [2].