Don’t Build on Hype: Ten Steps to Ground Your AI Policy in Reality

May 15, 2025
By Jacob Flinck and Colleen Valerio

Turns out the movies were right: artificial intelligence (AI) is rapidly integrating into our workplaces, and soon we will all be reporting to it. Kidding, mostly. In this blog, we will leave the debate about robot conquest to sci-fi films and Reddit threads and instead continue our focus on strategically utilizing AI within your organization.


While the potential uses for AI continue to grow at a seemingly infinite pace, organizations must ensure they don’t fall prey to the “ohhh-look-shiny-object” hype that can lead them to adopt technology without fully considering the implications. In our first blog post, AI in the Workplace: A Peek into FMP’s Policy, we explored the importance of creating AI policies that empower employees to harness the potential of AI tools while safeguarding sensitive client data, protecting intellectual property, and ensuring compliance with federal regulations. Now that we’ve covered the why, we’re going to dive into the how and discuss the framework we followed to create and implement our policies.

Developing an effective AI policy isn’t just about restricting usage; it’s about enabling responsible innovation at the individual level while mitigating organizational risks. Crafting any policy is a thoughtful process, but when it comes to something as dynamic and fast-evolving as AI, it requires stepping back from the hype, engaging in critical thinking, and building a framework grounded in organization-specific needs and values. In this post, we outline the 10 key steps your organization should follow to create a solid AI usage policy. Each step includes a few critical questions that help ensure your AI policy aligns with your strategic goals, organizational values, and legal obligations.

1. Start with the “why.”

Before drafting any policy, understand why you’re considering AI and what you realistically hope to achieve. This initial step is crucial both for setting a strategic focus and for ensuring employee buy-in. Humans inherently seek meaning. When employees understand why something is being done, they’re more engaged and less resistant (as resistance is often rooted in fear or confusion). Starting with the why also helps prevent shiny-object syndrome: it digs deeper than fleeting tech trends or surface-level solutions, aligns AI tools with organizational values, and ensures decisions are cohesive rather than reactive or fragmented.

Key Questions to Ask:
- What is the scope of this policy (e.g., Generative AI, all machine learning, specific platforms)?
- Who does it apply to (e.g., all employees, specific teams, customers, contractors, vendors)?
- Will we primarily rely on publicly available AI models, use enterprise versions, build/fine-tune our own internal models, or use vendor-specific closed systems? (This choice heavily influences security, data privacy, and usage guidelines.)

2. Involve stakeholders from across the organization.

Since the impact of adopting AI will be felt by employees across the organization, AI policy development shouldn’t happen in a silo. Involving a variety of perspectives helps identify shortfalls and builds broader buy-in, mitigating the risk of creating policies with limited applicability.

Key Questions to Ask:
- Who will use AI?
- How will jobs change with AI? What jobs might change due to AI?
- How will we ensure all relevant voices and perspectives are heard?

3. Understand the legal and regulatory landscape.

Ignoring the legal landscape is a recipe for disaster. AI development and regulation are evolving rapidly, and compliance is paramount. Understanding requirements means both adapting to current laws and anticipating future directions.

Key Questions to Ask:
- What specific laws (e.g., HIPAA, GDPR, CCPA, emerging AI Acts) apply based on our customer base and the data we process?
- What are the data privacy implications (e.g., collection, consent, usage, deletion)? How will we ensure compliance?
- Are there industry-specific regulations or standards we must adhere to?
- What do our existing contracts (e.g., client, customer, vendor, employee) say about data usage, confidentiality, and intellectual property in the context of AI? Do they need revision?

4. Define clear usage guidelines.

This is where the rubber meets the road and policy meets practice. AI usage guidelines must be clear, practical, and strike a delicate balance between enabling innovation and keeping employees within clear boundaries. There is a thin line between setting clear guidance and instituting overly complex rules, which can lead employees to develop workarounds or disengage out of fear of making an error.

Key Questions to Ask:
- What are clearly acceptable uses of approved AI tools (e.g., drafting emails, summarizing research, analyzing anonymized data, coding assistance)?
- What uses are strictly prohibited (e.g., inputting PII or client confidential data into public AI, making critical decisions without human oversight)?
- What are the expectations around transparency (e.g., disclosing when content is significantly AI-generated)?
- How do these guidelines differ if internal/closed AI or public AI tools are used?

5. Address security and data privacy.

AI tools, especially public ones, can pose significant security and privacy risks if used improperly. Protecting sensitive data is critical. Don’t assume vendor security is sufficient; define your own safeguards.

Key Questions to Ask:
- What are the security protocols for accessing and using approved AI tools?
- What are the procedures for reporting suspected data leaks or security incidents related to AI use?
- If developing internal AI, how do we ensure models are secure and protected?

6. Assign ownership and accountability.

A policy without clear ownership can create serious issues for the company. Ambiguity can lead to inaction, or to a room full of leaders looking at one another, each assuming it was someone else’s responsibility. Here, a Responsible, Accountable, Consulted, and Informed (RACI) matrix helps establish clear swim lanes and ensures nothing falls through the cracks (see the example sketch after the questions below).

Key Questions to Ask:
- Who is ultimately accountable for the AI policy’s success and enforcement?
- Who is responsible for specific tasks like training, vetting appropriate tools, monitoring usage, responding to incidents, and updating the policy?
- Who needs to be consulted before decisions are made? Who needs to be informed?
- Who provides oversight for ethical considerations?
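To make that concrete, here is a minimal sketch of what an AI policy RACI matrix might look like. The tasks and roles below are hypothetical placeholders, not a prescription; your organization’s structure will dictate the real entries.

Task | Responsible | Accountable | Consulted | Informed
Drafting and updating the policy | Policy lead | COO | Legal, IT, HR | All employees
Vetting new AI tools | IT security | CIO | Legal, team leads | Requesting teams
Delivering AI training | L&D team | HR director | Tool owners | All employees
Responding to AI incidents | IT security | CISO | Legal, Communications | Leadership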

7. Document the policy in plain language.

Translate the decisions made during the previous steps into clear, accessible documentation. Avoid dense legal-speak, which can be difficult to digest and discourage compliance. The goals here are comprehension and compliance.

Key Questions to Ask:
- Is the policy written in clear, concise language that all employees can understand?
- Does it clearly state the purpose, scope, guidelines, and consequences of non-compliance?
- Does it align with the organization’s culture and values?
- Has it been reviewed by legal counsel?

8. Communicate, train, and support.

Rolling out a new policy requires more than just an email. This is especially true for new technology, as employees’ understanding of, and experience with, AI can vary dramatically, and some will approach it with fear or skepticism. Overcoming that fear or skepticism requires clear communication and ongoing support. Addressing the human element by explaining the “why,” along with providing practical training, can help manage the change effectively. (Stay tuned, we’ll be expanding on this topic more in our next blog!)

Key Questions to Ask:
- What is our communication plan?
- How do our current employees understand AI? Where may we meet resistance? How will we address questions and concerns?
- What training methods will be most effective? How can we support ongoing training?
- How will we reinforce the policy over time?

9. Monitor compliance and measure impact.

Ongoing compliance monitoring ensures your policy is working as intended and identifies areas for improvement. This requires actively looking for both successes and problems, and building in checks to guard against confirmation bias.

Key Questions to Ask:
- What metrics will we use to track policy compliance, AI tool usage, and the impact of AI on objectives?
- What mechanisms are in place for employees to provide feedback or report issues related to AI use or the policy itself?
- How will we evaluate whether the AI tools are delivering the expected value and identify any unintended negative consequences?
- What is the process for investigating and responding to policy violations or AI-related incidents?

10. Review and revise regularly.

Unlike Ron Popeil’s famous rotisserie oven, your AI policy is not built to just “set it and forget it.” (Not sure if we should pretend AI gave us that line, or we should be honest and take credit for it. It’s bad, we know.) The AI landscape is changing incredibly fast, and your policy needs to be able to adapt strategically. Build in a regular review cadence to ensure it remains relevant, effective, and compliant.

Key Questions to Ask:
- How often will the AI policy be formally reviewed (e.g., quarterly, semi-annually)?
- What external triggers prompt immediate review (e.g., new major regulations, significant AI technology shifts, major security incidents)?
- How will insights from monitoring (Step 9) and stakeholder feedback inform policy revisions?

Building a solid AI policy requires an understanding of AI’s potential paired with a commitment to grounding decisions in reality rather than hype. While these steps may seem like a lot, they can be tailored to fit the unique needs of your organization, and many can be done in parallel to streamline the process. By getting started now, and asking these questions at each stage, you’ll be better equipped to guide your team in leveraging AI responsibly and to keep your organization ahead of potential risks. With the help of these ten steps, you can develop a framework that fosters an environment where AI enhances, rather than disrupts, your work.



Jacob Flinck is a Managing Consultant with a focus on strategic communications and change management, learning and development, and organizational effectiveness. Jacob co-leads our Strategic Communications Community of Practice (CoP), leads internal special projects focused on IT, security, and operations (including AI), and works to expand our intelligence community (IC) practice. Jacob is a Prosci Certified Change Practitioner. For fun, he likes to cook, bake, read, travel, and jump on his bike for some Peloton classes.


                      Colleen Valerio is a Senior Consultant with a focus on workforce development, organizational effectiveness, and technological integration. Outside of work you will likely find her racing her 4-year-old son up a tree, planning her upcoming snowboarding trip, or knee-deep in a home renovation project that is taking 200% longer than planned.