AI in the Workplace: A Peek into FMP’s Policy

In case you haven’t noticed, AI has been in the news lately. Touted for its ability to increase effectiveness, AI can streamline your operations by automating repetitive tasks, improving productivity, and freeing up employees to focus on higher-value work. This not only saves time but can also reduce the risk of human error. In fact, the interview questions I use later in this blog were generated with the help of AI; it gave me some ideas to start with that I then tailored to better align with this topic.

Before we go any further, it is important to note that, even with all the hype, AI has its downsides. It can make mistakes and provide inaccurate information, so you should always fact-check what it tells you. You also need to be careful with the information you enter if you are using a publicly available AI tool (e.g., ChatGPT). If you provide client or business confidential information, it becomes part of the data the AI tool learns from, which can in turn be used to answer other users’ questions. To mitigate this risk, it is important for organizations to provide their employees with clear policies and guidelines on the responsible use of AI technology. Of note, if an organization has an internal AI tool (one that sits within its own IT environment and leverages only its own data), a different set of rules and policies would apply. In this conversation, we’re focusing on publicly available AI tools.

At FMP, we developed AI-specific policies to help our employees leverage the power of publicly available tools while safeguarding client data and intellectual property and ensuring our compliance with federal regulations. These policies ensure we maintain our professional integrity, offering clients expertly developed deliverables and services that are enhanced by advanced technologies but driven by our knowledge, experience, and expertise.

Our policies spell out what employees can and cannot use publicly available AI tools for. Employees may use AI for brainstorming, researching best practices, suggesting outlines, or drafting communications. It can also be used for fact-checking or summarizing publicly available information, identifying code for data analysis (without inputting client data), and informing content creation, provided the output is tailored, validated, and marked as AI-generated. Employees may not input corporate information or client-provided data and materials into AI tools. AI-generated content for deliverables, proposals, or thought leadership must be significantly tailored to avoid plagiarism, thoroughly checked for accuracy, and properly acknowledged as being AI-generated (my note in the first paragraph is an example of this in action).

I also sat down with Scott Burba, one of FMP’s IT gurus, to get his thoughts on the importance of having an AI policy in place.

Jacob: Why is it important for organizations to have an AI policy in place?

Scott: First, nothing is simple with AI, and it will not get easier given that every company seems to be working to incorporate AI internally or in its offerings. But just like with any new technology, providing an AI use policy will help your organization balance security and compliance risk with increased efficiency and productivity in all aspects of operations. Your AI use policy not only needs to give the “dos and don’ts,” it should also provide resources that help your staff see what is behind the curtain, so to speak, so they understand the technology’s limitations.

Jacob: What are the biggest challenges related to AI use, and how can organizations address them?

Scott: The biggest challenge related to AI use is exposing company information publicly by entering it into prompts. Once you enter information into a public AI tool, you can’t recall it (yet). Staying abreast of our contract compliance requirements related to AI use is another challenge. There is no “one policy to rule them all” and likely never will be, so you will need to stay informed of these requirements. Addressing these challenges will require continuous education, monitoring AI use in your environment, and staying engaged with what is happening in the real world, all tasks that, ironically, AI will likely help you complete.

Jacob: What best practices would you recommend to organizations developing their own AI policies?

Scott: It is important to focus on education, giving your employees information about how AI gets its “knowledge” and can therefore introduce errors, bias, and other problems into its responses. At the root of it, understanding where AI models get their data plays a big part in judging the trustworthiness of the information they deliver. Beyond education, you should look to the same best practices you have probably applied to policies around any new tool: establishing governance, understanding compliance requirements, building audit mechanisms, and maintaining documentation. AI is new, big, and growing, but fundamentally it is a tool (just like all the technologies of past decades) that we need to address methodically, like any other tool.


Jacob Flinck is a Managing Consultant with a focus on learning and development, organization development, and communications and change management. Jacob currently leads internal special projects focused on IT and operations and works to expand our intelligence community (IC) practice. Jacob is a Prosci Certified Change Practitioner and a Certified Scrum Master (CSM). For fun, he likes to cook, bake, read, travel, and jump on his bike for some Peloton spin classes.