(for researchers who don’t merely use AI, but work alongside it)
AI¹ is no longer a tool but a cognitive environment. Increasingly embedded in everyday research, it shapes how problems are explored, how results are produced, and how work is coordinated. AI is now part of how research is actually done.
This shift changes not only how work is produced, but how thinking unfolds, how collaboration is structured, and how responsibility is assigned. AI compresses exploratory labor, externalizes parts of reasoning, and alters the tempo of intellectual work. These effects are not neutral.
This document takes that shift as a starting point and asks what responsible research looks like once AI becomes part of the environment.
It focuses on exploratory and generative forms of research, where these dynamics are most apparent, and articulates principles for working under these conditions.
Written in 2026, as AI became unavoidable in everyday research.
We believe that AI use can meaningfully expedite research and engineering work, not by replacing expertise, but by expanding how much can be explored before decisions are made.
In practice, AI is used to:
- explore large or unfamiliar problem spaces
- generate and compare alternative explanations, hypotheses, or models
- surface assumptions and stress-test proposed interpretations
- assist with code development, refactoring, testing, and debugging
- summarize, reorganize, and synthesize complex or fragmented material
- support drafting, revision, and clarification of arguments
AI expands what can be examined and iterated on within limited time and attention. It makes certain forms of exploration cheaper and faster, and enables lines of inquiry that would otherwise be impractical.
Used deliberately, AI allows researchers to move faster while looking further. It increases the speed at which ideas can be proposed and challenged, without changing the standards by which claims are ultimately accepted.
Using AI well is not a single action but a process.
Effective use typically involves extended interaction, iterative refinement, and active judgment. It goes beyond issuing a prompt and accepting an output.
In practice, this includes:
- exploring multiple framings of a problem rather than settling on a single answer
- iterating on questions, constraints, and representations as understanding evolves
- actively probing for weaknesses, counterarguments, or failure modes
- comparing alternative outputs and examining where and why they diverge
- revising or discarding AI-generated material when it obscures rather than clarifies
- integrating AI-assisted work with domain knowledge, empirical checks, and independent reasoning
Good AI use is often slower than it appears from the outside.
What distinguishes deliberate use is not the sophistication of prompts, but the quality of judgment applied to what is produced.
Using AI changes the pace at which ideas are generated, tested, and discarded.
This does not simply make existing workflows faster. It alters how much can be explored before committing to a particular direction, how early weaknesses are exposed, and how readily alternative framings can be examined.
Under accelerated conditions, work tends to proceed in shorter cycles of proposal, critique, and revision. More branches are explored, more dead ends are identified earlier, and assumptions are revisited more frequently. This increases the surface area for error, but also raises the likelihood that weak ideas fail quickly and strong ones are stress-tested early.
Acceleration is not a virtue in itself. It is best understood as a way of encountering uncertainty sooner, not avoiding it.
This tempo has human costs. AI systems do not tire, become frustrated, or lose perspective; researchers do. Sustained AI use can exhaust attention, amplify grandiosity or self-critique, and create pressure to respond to every newly generated possibility. Without care, the very abundance that enables exploration can become cognitively and emotionally destabilizing.
For this reason, deliberate pacing, stopping rules, and periods of disengagement are not signs of resistance to AI, but requirements for using it responsibly. Managing energy, attention, and morale is part of the work.
The consequence is a different research posture: generation becomes cheap, revision becomes constant, and judgment becomes the scarce resource.
At the same time, we recognize that AI use introduces new limitations and risks.
AI does not determine what is true, relevant, or important. It does not supply warrant, responsibility, or judgment. It does not eliminate the need for care, interpretation, or disagreement.
AI can amplify error as easily as insight, reinforce unexamined assumptions, and produce outputs that appear coherent without being well-grounded. Increased speed and fluency increase the risk of settling too quickly on answers that sound convincing, even when they haven’t been adequately checked or compared to alternatives.
For these reasons, AI use requires deliberate oversight: regularly pausing to assess what has been produced, checking outputs against external constraints, probing for weaknesses or failure cases, and deciding explicitly when further use no longer adds understanding.
We do not optimize for:
- speed for its own sake
- output volume as a proxy for insight
- impressive AI tricks
- performative cleverness
We do optimize for:
- clarity of thought
- explicit assumptions
- robust doubt
- transferable insights
- mental steadiness in collaboration
AI is not treated as a production engine, but as a pressure vessel for ideas.
Everyone maintains a private AI space. Interaction with AI is inherently personal and context-dependent, functioning much like a notebook: exploratory, provisional, and not intended for direct sharing. These interactions are rarely stable, transferable, or informative outside the situation in which they were produced.
What is shared are consolidated artifacts that make the state of the work legible to others. This includes drafts, figures, slides, code, analyses, and other working documents that show how the problem was framed, which paths were explored or discarded, and where assumptions or uncertainties remain.
Shared work is evaluated by what it makes possible for others: understanding, critique, reuse, and revision.
The guiding rule is simple:
What is shared should support others’ understanding, not document individual effort.
Not everyone uses AI with the same intensity. Not everyone wants to. This is not a problem to be solved.
Intensive AI use does create real advantages: faster iteration, broader search, and easier synthesis. It would be dishonest to deny this. But these advantages concern throughput and exploration, not truth, warrant, or authority.
What matters for collaboration is not how work is produced, but whether its claims can be understood, questioned, and defended by others.
Accordingly:
- no one is judged for using less AI, or none
- no one is pressured to adopt another person’s tools or tempo
- shared work must stand on its own, without reliance on private interactions or inaccessible systems
- evaluation rests on clarity, evidence, and robustness, not pace
Acceleration changes the process, not the fact that we answer to the world.
AI does not author work. People do.
Authorship reflects intellectual responsibility: responsibility for claims made, distinctions drawn, evidence selected, and implications accepted. The use of AI does not alter this responsibility.
Using AI does not mean handing thinking over to a system. Responsibility for judgment and meaning remains human.
All claims, interpretations, results, and code behavior remain the responsibility of the listed authors, regardless of how they were produced.
AI may assist with exploration, synthesis, reformulation, and the development of research outputs across text, analyses, models, and code. Responsibility for correctness, framing, omissions, reproducibility, and consequences cannot be delegated and remains with the authors.
If a result or code artifact cannot be explained, justified, and defended by its authors without appealing to the system that generated it, it is not ready to be shared.
Ethics is not treated as a checklist, but as a continuing practice of questioning.
Relevant questions include:
- Environmental cost: What energy, compute, and material resources does this mode of work consume, and how are those costs distributed across people, institutions, and environments?
- Work and employment effects: Which tasks are automated, reduced, or eliminated by AI use, which new tasks are created, and who loses work, gains work, or takes on additional risk as a result?
- Concentration of power: Who controls the models, data, infrastructure, and terms of access that make this work possible, and how does that concentration shape who can participate, compete, or meaningfully question dominant systems?
- Evidence and challenge: When AI-generated or AI-assisted outputs are used, what counts as acceptable evidence or explanation, and how easily can those results be checked, reproduced, or challenged by others?
- Implicit assumptions: What modeling choices, training data, defaults, or abstractions enter the work without explicit acknowledgment, including biases, value judgments, filtering, or content restrictions, and how accessible are these influences to scrutiny, critique, or revision?
- Originality and derivation: How is the contribution of the work distinguished from recombination or imitation, and how clearly can its sources, transformations, and novel elements be identified and justified?
- Epistemic risk: Where does increased speed encourage premature convergence, automation bias, or erosion of interpretive care?
- Scientific responsibility: At what point does AI use stop supporting understanding and begin to compromise the standards the work is meant to uphold?
These questions have no final answers. They must be revisited as tools, practices, and contexts change.
This way of working is not a requirement for others.
Because methods will differ, collaboration and evaluation cannot depend on shared processes. They depend on what is ultimately put forward for scrutiny.
Accordingly:
- work is judged by its substance, not by the tools used to produce it
- claims are assessed on clarity, evidence, and coherence, independent of workflow differences but not of responsibility
- accountability for correctness and meaning remains unchanged, regardless of method
AI does not remove the fundamental demands of research. It reshapes how effort is distributed, making some forms of work cheaper while placing greater weight on judgment, interpretation, and responsibility.
Understanding still takes time. So does disagreement. So does learning what can and cannot be trusted.
Footnotes
1. Here, “AI” (Artificial Intelligence) refers primarily to contemporary machine-learning systems, especially large-scale generative models (e.g., language, vision, and multimodal models) capable of producing text, code, images, or other structured outputs in response to prompts. More broadly, it also includes the surrounding ecosystem: training data, model architectures, inference infrastructure, fine-tuning methods, evaluation practices, tooling, and organizational workflows that integrate these systems into research and engineering practice. The term is used in the broad, informal sense in which “AI” is commonly used today, rather than as a precise technical or theoretical category.