
A 6-Week Module for Living with Nonhuman Entities

  • Writer: Ryan Bince
  • Oct 3
  • 4 min read

In the summer of 2022, I led a small seminar at Tidelines Institute in Gustavus, Alaska, examining a question that has become urgent for organizations deploying AI systems: what distinguishes human activities from other forms of agency, and how should those distinctions guide decisions about managing a mixture of human and nonhuman agents?



The course traced Western thought from ancient Greek political theory to contemporary ecology, developing conceptual frameworks for navigating a future where the boundaries between human and nonhuman agency are increasingly blurred.


The course's final assignment crystallized its practical application. Students wrote "Statements of Intention" articulating how they wanted to relate to nonhuman entities in their lives—a deliberate exercise in defining values and boundaries before external pressure forces reactive decisions.


This mirrors exactly what organizations need today: proactive frameworks for human-AI collaboration that reflect considered principles rather than defaults shaped purely by technological capability and competitive pressure.


Labor, Work, and Action in the Age of Automation

The course began by distinguishing among three forms of human activity, a framework that proves remarkably useful for AI governance. These forms layer upon one another as human societies grow more sophisticated:


  1. Labor: Repetitive activity necessary to sustain biological life: acquiring food and shelter, and protecting against illness. This consumed the bulk of energy in early human societies.

  2. Work: The production and maintenance of enduring objects and systems, including tools, which ease the difficulty inherent in labor.

  3. Action: The creative force by which genuinely new things enter the world, as distinct from mere motion, the default state of matter and of most life forms.


These distinctions provide a vocabulary for thinking strategically about automation. Organizations deploying AI must decide which activities to automate, which to augment, and which to preserve as fundamentally human regardless of efficiency gains.


The course framework suggests that while routine labor may be efficiently automated and certain work processes productively augmented, the creative action of humans in any organization resists automation.


For now, at least, collective sense-making and collaborative innovation require forms of human participation that cannot simply be replaced.
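

One way to make this triage concrete is with a minimal sketch (my own illustration, not part of the course materials; the names Activity, Posture, and triage are hypothetical) that encodes the labor/work/action distinction as a default automation policy:

    from enum import Enum, auto

    class Activity(Enum):
        LABOR = auto()   # repetitive routines that sustain biological life
        WORK = auto()    # producing and maintaining enduring tools and systems
        ACTION = auto()  # creative initiative that brings new things into the world

    class Posture(Enum):
        AUTOMATE = auto()  # hand the activity off to machines
        AUGMENT = auto()   # machine-assisted but human-directed
        PRESERVE = auto()  # kept fundamentally human regardless of efficiency

    # Default mapping suggested by the framework: labor automates well,
    # work augments well, action resists automation.
    DEFAULT_POSTURE = {
        Activity.LABOR: Posture.AUTOMATE,
        Activity.WORK: Posture.AUGMENT,
        Activity.ACTION: Posture.PRESERVE,
    }

    def triage(activity: Activity) -> Posture:
        """Return the default automation posture for a form of activity."""
        return DEFAULT_POSTURE[activity]

    for a in Activity:
        print(f"{a.name:<6} -> {triage(a).name}")

The value of such a table lies less in the code than in the conversation it forces: any activity an organization cannot confidently classify is exactly the one that deserves deliberate scrutiny before automation.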


Emergent Coordination and Unintentional Design

The course examined ecosystems where value emerges not from centralized planning but from overlapping activities that create capabilities no single agent could produce alone. These patterns of "unintentional design" offer models for human-AI partnerships that go beyond simple task division.


Rather than treating AI deployment as a zero-sum replacement dynamic, organizations might cultivate conditions where human judgment and machine processing create emergent capabilities through coordination rather than competition.


This reframing has practical implications for implementation strategy. Instead of asking "which jobs can AI replace?", organizations might ask "what new forms of work become possible when humans no longer spend cognitive resources on routine pattern recognition?"


The course's ecological perspective suggests that the most productive human-machine systems may be those designed for generative interaction rather than efficient substitution.


Culture, Communication, and Collective Intelligence

The course explored evidence that nonhuman animals engage in complex cultural transmission, collaborative problem-solving, and social coordination—capacities previously assumed to be exclusively human. This research challenges organizations to reconsider which capabilities actually require human participation and which assumptions about "uniquely human" skills may simply reflect narrow definitions.


For AI deployment, this matters because it shifts the question from "can machines do this?" to "what forms of participation create value beyond task completion?"


Organizations investing in AI systems often discover that certain activities—building institutional memory, maintaining stakeholder relationships, exercising contextual judgment in ambiguous situations—involve dimensions of participation that automated systems can assist with but not fully replicate.


The course framework helps distinguish between cognitive capabilities that machines increasingly match and forms of embedded, relational engagement that remain distinctly valuable.


Reciprocity, Responsibility, and Care

The course engaged frameworks emphasizing reciprocity and regeneration in relationships across difference—whether between species, between humans and ecosystems, or between different forms of intelligence.


These models challenge extractive logics that treat relationships as purely transactional, suggesting instead that sustainable systems depend on mutual care and long-term investment in collective capacity.


Applied to organizational AI strategy, this perspective reframes implementation away from pure efficiency optimization toward building systems that enhance human capability while maintaining meaningful human agency and accountability.


Rather than asking "how can we minimize human involvement?", organizations might ask "how can we design systems where humans remain responsible stewards of outcomes, with genuine authority to intervene, override, and redirect automated processes?"
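

As one hedged sketch of what that stewardship might look like in practice (my illustration, not a design the course prescribes; Proposal, run_with_oversight, and confidence_floor are hypothetical names), an automated pipeline can route low-confidence decisions through a mandatory human checkpoint and log every outcome for later audit:

    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class Proposal:
        action: str        # what the automated process wants to do
        confidence: float  # self-reported confidence in [0.0, 1.0]

    def run_with_oversight(
        propose: Callable[[], Proposal],
        review: Callable[[Proposal], Optional[Proposal]],
        audit_log: list,
        confidence_floor: float = 0.9,
    ) -> Proposal:
        """Execute an automated proposal under standing human authority.

        Proposals below the confidence floor require human review; the
        reviewer may return a replacement (an override or redirect) or
        None to let the proposal stand. Every outcome is logged so that
        humans can audit, and later intervene in, the process.
        """
        proposal = propose()
        if proposal.confidence < confidence_floor:
            override = review(proposal)  # mandatory human checkpoint
            if override is not None:
                audit_log.append(("overridden", proposal, override))
                return override
        audit_log.append(("executed", proposal, None))
        return proposal

    # Example: a low-confidence payroll action is escalated and redirected.
    log: list = []
    result = run_with_oversight(
        propose=lambda: Proposal("pause payroll run", 0.72),
        review=lambda p: Proposal("flag payroll run for finance review", 1.0),
        audit_log=log,
    )

The design point is that the human gate is structural rather than advisory: the reviewer's decision wins, and the audit trail records that oversight actually occurred.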


Disturbance as Transformation

The course examined how ecosystems respond to disruption not by returning to previous states but by generating novel assemblages through unexpected collaboration. This offers a productive alternative to narratives of AI deployment as either preserving the status quo or destroying existing structures.


Organizations might instead cultivate conditions for generative recombination—experimenting with new configurations of human and machine capabilities that emerge through practice rather than comprehensive advance planning.


This requires tolerance for uncertainty and willingness to adapt organizational structures as new possibilities reveal themselves.


The course emphasized that transformation is not a discrete event but an ongoing process of adjustment as capabilities evolve and relationships develop.


Organizations implementing AI systems face similar dynamics—early deployment decisions create path dependencies, but productive adaptation requires ongoing reassessment of which human roles remain valuable and which automated processes require human oversight.


Defining Boundaries Before They Define You

The course's pedagogical approach—asking students to articulate their values and commitments before technological change forced reactive adaptation—translates directly to organizational AI governance.


Companies establishing AI ethics committees, developing responsible AI principles, and creating oversight structures are engaging in exactly this exercise: defining boundaries and commitments proactively while they still have the agency to make implementation choices.


The value of this exploration lies not in resolving every tension but in developing conceptual clarity about what organizations want to preserve, what they're willing to transform, and what principles should guide those decisions when efficiency and values conflict.


As AI capabilities expand and competitive pressure intensifies, organizations with well-articulated frameworks for human-machine collaboration will be better positioned to maintain strategic coherence, regulatory compliance, and social legitimacy. The course provided exactly this: structured practice in thinking through what makes human participation irreplaceable before external forces dictate suboptimal defaults.
