Artificial Intelligence Policy

Emily Journey & Associates uses artificial intelligence to support internal efficiency, research, workflow improvement, and selected operational tasks. We do not use AI to replace human judgment, professional responsibility, or client trust. Human review remains part of final decision-making, strategic work, and client-facing deliverables.

Purpose

This policy explains how Emily Journey & Associates approaches the use of artificial intelligence in our business. It reflects our commitment to responsible use, human accountability, privacy, security, and clear professional standards.

We believe AI can support good work. We also believe it creates real risks when used carelessly. Those risks include inaccuracy, bias, privacy concerns, security issues, and trust erosion. Our approach is to use AI with discipline, boundaries, and human oversight.

How We Use AI

We use AI to support internal work such as research, drafting, summarization, workflow support, and efficiency improvements. We also use AI to strengthen selected parts of our operations and service delivery where doing so supports quality, clarity, or consistency.

We do not treat AI output as automatically correct. We do not rely on AI as a substitute for expertise, review, or decision-making. We use it as a support tool within a human-led business.

Human Oversight and Accountability

A person remains responsible for the work.

We do not delegate final judgment to AI systems. Human review and human decision-making remain part of work that affects clients, strategy, communications, compliance, reputation, or business operations.

AI can assist our process. It does not replace responsibility.

Privacy and Confidentiality

We use AI with care when privacy, confidentiality, or sensitive information is involved.

We take reasonable steps to protect the information entrusted to us. We do not enter confidential, protected, or sensitive information into AI tools when doing so would create unnecessary risk. We expect our use of AI tools to align with our privacy obligations, confidentiality standards, and applicable law.

Accuracy, Fairness, and Bias

AI systems can produce inaccurate, incomplete, or biased output. Because of that, we do not assume AI output is neutral or reliable.

We review AI-assisted work for accuracy, context, fairness, and appropriateness. We work to identify distortions, reduce bias, and avoid uses of AI that misrepresent people, organizations, or communities.

Fairness requires human attention. It is not automatic.

Security

We recognize that AI tools can create security risks if they are used without appropriate care.

We take reasonable steps to reduce those risks by using judgment about what information is entered into AI systems, how outputs are handled, and where human review is required. We believe security is part of responsible AI use, not a separate issue.

Transparency

We believe people deserve clarity about how AI is used in professional work.

We aim to be transparent about our use of AI where transparency supports trust, accuracy, or responsible communication. At the same time, we protect confidential information, security practices, and private business operations where disclosure would be inappropriate.

Third-Party Tools and Partners

When we use outside platforms, vendors, or partners that involve AI, we expect reasonable safeguards around privacy, security, reliability, and accountability.

We expect third-party tools and partners to operate in ways that respect the rights of individuals and comply with applicable laws and regulations. We also expect appropriate care in the training, deployment, and use of AI-enabled tools.

Prohibited Uses

Emily Journey & Associates does not support the use of AI in ways that create clear harm, undermine trust, or violate basic rights.

That includes uses such as:

  • Social scoring based on behavior or personal characteristics
  • Emotion recognition used in exploitative or invasive ways
  • Manipulation designed to bypass informed choice or free will
  • AI used to exploit vulnerability based on age, disability, or social or economic circumstance
  • Deceptive or harmful uses that undermine safety, autonomy, dignity, or trust

What This Means in Practice

Our use of AI is guided by a few simple rules:

  • We use AI to support work, not to replace human judgment.
  • We review important outputs before they are used.
  • We protect privacy, confidentiality, and sensitive information.
  • We take bias, fairness, and accuracy seriously.
  • We do not use AI in ways that deceive, manipulate, or create preventable harm.

Our Commitment

At Emily Journey & Associates, we use AI to support thoughtful, accountable work. We do not treat it as a shortcut around expertise, care, or responsibility.

Our goal is to use technology in ways that strengthen our work, protect trust, and serve people well.

Work with a Human, Not a Bot