Your new teammate is a machine. Are you ready?

Companies across industries are investing heavily in AI to enhance employee productivity. A leader at the consulting firm McKinsey says he envisions an AI agent for every human employee. Soon, a factory manager will oversee a production line where human workers and intelligent robots seamlessly develop new products. A financial analyst will partner with an AI data analyst to uncover market trends. A surgeon will guide a robotic system with microscopic precision, while an AI teammate monitors the operation for potential complications.

These scenarios represent the forefront of human-machine collaboration, a significant shift that is quickly moving from research labs into every critical sector of our society. In short, we are on the verge of deploying AI not just as a tool, but as an active partner in our most important work. The potential is clear: If we effectively combine the computational power of AI with the intuition, creativity, and ethical judgment of a human, the team will achieve more than either could alone.

But we aren’t prepared to harness this potential. The biggest risk is what’s called “automation bias.” Humans tend to over-rely on automated systems and, worse, to favor their suggestions even when correct information contradicts them. Automation bias can lead to critical errors of commission (acting on flawed advice) and omission (failing to act when a system misses something), particularly in high-stakes environments. Even greater proficiency with AI doesn’t reliably mitigate automation bias. For example, a study of the effectiveness of clinical decision support systems in health care found that individuals with moderate AI knowledge were the most over-reliant, while both novices and experts showed more calibrated trust. What did lead to lower rates of automation bias was making study participants accountable for either their overall performance or their decision accuracy.

This leads to the most pressing question for every leader: When the AI-human team fails, who will be held accountable? If an AI-managed power grid fails or a logistics algorithm creates a supply chain catastrophe, who is responsible? Today our legal and ethical frameworks are built around human intent, creating a “responsibility gap” when an AI system causes harm. That gap exposes organizations to significant legal, financial, and reputational risks.

First, it produces a legal vacuum. Traditional liability models are designed to assign fault to a human agent with intent and control. But an AI is not a moral agent, and its human operators or programmers may lack sufficient control over its emergent, learned behaviors, so it becomes nearly impossible to assign blame to any individual. This leaves the organization that deployed the technology as the primary target of lawsuits, potentially liable for damages it could neither predict nor directly control.

Second, this ambiguity around responsibility cripples an organization’s ability to respond effectively. The “black box” nature of many complex AI systems means that even after a catastrophic failure, it may be impossible to determine the root cause. This prevents the organization from fixing the underlying problem, leaving it vulnerable to repeated incidents, and undermines public trust by making the organization appear unaccountable.

Finally, it invites regulatory backlash. In the absence of a clear chain of command and accountability, industry regulators are more likely to impose broad, restrictive rules, stifling innovation and creating significant compliance burdens.
The gaps in liability frameworks were laid bare after a 2018 fatal accident involving an Uber self-driving car. Debate arose over whether Uber, the system manufacturer, or the human safety driver was at fault. The case ended five years later with “the person sitting behind the wheel” pleading guilty to an endangerment charge, even though the automated driving system had failed to identify the pedestrian walking a bicycle and to brake.

Such ambiguities complicate the implementation of human-machine teams. Research reflects this tension, with one study finding that while most C-suite leaders believe the responsibility gap is a serious challenge, 72% admit they do not have an AI policy in place to guide responsible use.

This isn’t a problem that Washington or Silicon Valley alone can solve. Leaders in any organization, whether public or private, can take steps to de-risk AI adoption and maximize their return on investment. Here are practical actions every leader can take to prepare their teams for this new reality.

Start with responsibility. Appoint a senior executive responsible for the ethical implementation of AI-enabled machines in your organization. Each AI system must have a documented human owner, not a committee, who is accountable for its performance and failures. This ensures clarity from the start. Require your teams to define the level of human oversight for each AI-driven task, deciding whether a human needs to be “in the loop” (approving decisions) or “on the loop” (supervising and able to intervene). Accountability should be the first step, not an afterthought.

Onboard AI like a new hire. Train your staff not only on how to use AI but also on how it thinks, its limitations, and its potential failure points. The aim is to build calibrated trust, not blind trust. Approach AI integration with the same thoroughness as onboarding a new employee. Begin with less critical tasks to help your team understand the AI’s strengths and weaknesses. Establish feedback channels so that human team members can help improve the AI. When AI is treated as a teammate, it is more likely to become one.

Integrating AI as a teammate in our work is inevitable, but ensuring success and safety requires proactive leadership. Leaders who establish clear accountability, invest in comprehensive training, and prioritize fairness will thrive. Those who treat AI as just another tool will face the consequences. Our new machine teammates are here; it’s time to lead them effectively.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.