Artificial Intelligence (AI) is an increasingly important tool in the marketing landscape, offering unprecedented possibilities for content creation and customer engagement.
This AI usage policy is designed to guide our team members in the responsible, transparent, and ethical use of AI in their work. The aim of this policy is not to hinder creativity or innovation, but to ensure that our use of AI aligns with our corporate values and respects our customers’ rights.
Guidelines for Responsible AI Usage

Transparency
We must remain transparent about our use of AI. This includes acknowledging when AI has been used to create or modify content, whether through a blanket statement on our website or through language integrated into client contracts.
AI tools are leveraged to review grammar and check spelling in written content such as website text, blogs, and content offers. At Snyder Group, we have access to a range of AI programs designed for assistive purposes. However, all of our content is meticulously crafted and scrutinized by individuals who possess a deep understanding of the inherent limitations of AI. This ensures that biases are caught before publication and that ethical marketing practices are upheld, maintaining the highest standards of quality and credibility.
The following AI tools have been approved for use in our company:
- HubSpot AI
- Google AI Platform
- ChatGPT (OpenAI)
- Adobe Sensei
- Salesforce Einstein
- IBM Watson Marketing
- Sprout Social
AI acts as an assistant, complementing creativity and good judgment rather than replacing them. We strictly prohibit publishing AI-generated content without human development and quality-assurance review. A team of skilled writers collaborates on content creation and revision, ensuring accuracy and upholding high quality standards.

To monitor our AI tools’ performance, we regularly evaluate customer feedback, analyze tool-generated data, and maintain guidelines for responsible and ethical AI usage aligned with our company values. We also provide training to employees, ensuring everyone is accountable for crafting top-notch content. If AI-assisted content produces any negative outcome, we take responsibility and remediate as necessary.
Use Cases That Should Not Leverage AI
While there are many positive use cases of AI assistance in our work, there are specific types of work in which we have decided as a company to restrict the use of AI. Do not use AI for the following:
- Discrimination and Bias
- Surveillance and Privacy
- Critical Decision-Making
- Deepfakes and Misinformation
- Emotion Manipulation
- Unsolicited Communication
- Unethical Content Creation
- Invasive Advertising
- Sensitive Data Analysis
It’s essential to carefully evaluate the potential risks and ethical considerations associated with AI usage in various contexts and create clear guidelines and restrictions to ensure responsible and ethical AI deployment.
Addressing Specific Issues
Bias & Ethical Considerations
We recognize that AI systems learn from the data they receive, which can unintentionally perpetuate biases present in the training material. While many language models have filters to reduce biased or harmful outputs, relying solely on filters is insufficient. Therefore, we prioritize content created by human authors, backed by a team of editors who carefully review it for potential bias and refine it to be inclusive and accessible. We firmly adhere to the principle that AI should not be used to deceive or manipulate customers. Any AI-assisted content must be produced ethically, in alignment with our corporate values, and must undergo a rigorous review process to rectify biases, inaccuracies, and other potential pitfalls.
Privacy & Security
At our company, ensuring customer privacy is our foremost priority. We utilize tools with robust privacy policies and take extra precautions by refraining from uploading customer data into AI tools or language models. Additionally, we meticulously vet programs to safeguard our intellectual property (IP). We understand the potential for cyber-attacks on AI systems, and our partnerships with platforms that prioritize data security allow us to protect our valuable data and IP and to prevent its exploitation for training publicly accessible language models.
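As a practical illustration of "refraining from uploading customer data," a simple pre-submission scrub can strip recognizable personal details from a draft before it is pasted into any AI tool. The sketch below is hypothetical, not an approved Snyder Group utility: the `redact_pii` helper and its patterns are illustrative assumptions, and any real redaction workflow should use a vetted tool and still receive human review.

```python
import re

# Hypothetical example patterns; real-world PII detection needs a vetted library.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace recognizable email addresses and phone numbers with placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

draft = "Follow up with jane.doe@example.com or call 203-555-0142 about the campaign."
print(redact_pii(draft))
# Follow up with [EMAIL REDACTED] or call [PHONE REDACTED] about the campaign.
```

The point of the sketch is the workflow, not the patterns: customer identifiers are removed on our side before any text leaves our systems, rather than trusting a third-party tool's privacy settings.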
It is our policy that employees must not use AI to impersonate any person without that person’s express permission. AI tools make it possible to create content “in the style of” public figures; as a matter of policy, we do not do that at our company. Designated employees may, with permission and review, use AI to mimic the writing style of a current Snyder Group Inc. employee for the purposes of ghostwriting or editing content from that individual.
Training Employees on AI Usage
All employees involved in creating content with AI should receive appropriate training. This should cover both the technical aspects of using AI and the ethical considerations outlined in this policy.
Best Practices for Implementation
To practically implement this policy, always follow these steps:
- Understand the AI systems you use, including how they work and their potential limitations.
- Ensure that every new hire and existing employee you manage has read this policy.
- For each approved tool, document (or use company-provided materials that document) its functionality, its limitations, and our company standards for using it.
- Continually update your knowledge and training as AI technology evolves.
By using AI in your work, you agree to comply with this policy. Non-compliance will be taken seriously and may lead to disciplinary action, up to and including termination of employment. Remember, the goal of this policy is not to restrict creativity, but to ensure that we use AI responsibly and ethically. By following these guidelines, we can harness the power of AI while respecting our customers and upholding our company values.