The Liberated Writer & FFS Media AI policy
As technology advances and authors are faced with more options for tools, as well as more threats to their visibility and IP, we’ve reached a point where transparency and accountability for the digital tools we use have become crucial to maintaining trust and respecting individual consent.
The “AI” policy of the Liberated Writer and FFS Media has been developed from core values, thoughtful consideration, and continued learning about available tools.
These core values arise from my personal values and include:
Integrity
Aligning our thoughts, emotions, and actions with what we profess matters to us. This doesn’t mean perfection but rather adjusting at a realistic pace once we become aware of misalignment.
Privacy
Every individual is entitled to a private inner life and the right to decide who we do and do not share parts of ourselves with. This includes our personal data and information. Without this right to privacy, liberty and intimacy are not possible.
Agency
Remembering our innate ability to shape both the internal and external world through our thoughts, emotions, and actions. Rejecting the cynical idea of “inevitability.” Owning the responsibility that comes with acknowledging our power.
Consent
Understanding that individual sovereignty in relationships requires honesty and transparency from all parties. Taking “no” for an answer and seeking an enthusiastic and informed “yes” from others.
Shared humanity
Acknowledging that every human has equal and innate value just by existing. Understanding that the story of humanity is just one among the stories of planet Earth, and committing to harm reduction whenever possible within the complex structures of one’s society.
I vet the adoption of new tech tools through these values before incorporating them into my workflow. Very few have made the cut. I’m not an AI purist or “anti-AI”; however, the decisions and concessions baked into the design of the majority of AI technology are simply not in alignment with my core values.
Committing to values-alignment means that as technology changes, the particulars of this policy may evolve, but new and existing tools will always be evaluated on a basis of integrity, privacy, agency, consent, and shared humanity.
I’ve drawn on my specialized training and expertise regarding fear, attention, and cognitive, emotional, and behavioral patterns to thoughtfully develop my present position on the tools we collectively refer to as “AI.”
I believe that assistive AI (text-to-speech, transcription, closed captioning), agentic AI (Notion AI, OpenClaw, Claude Cowork), and generative AI (LLMs like Claude, ChatGPT, Grok, and image generators like MidJourney, Dall-E, Stable Diffusion) deserve to be discussed separately for the sake of clarity when we talk about “AI,” so I’ve divided those into their own subsections.
Nothing in this policy will come as a surprise to those who understand my commitment to my core values. If you have further questions about how I arrived at these policies, I’m happy to chat and share resources!
In the spirit of the Enneagram, I’ve broken my policy into distinct sections that represent the three instincts: Self-preservation (me), One-on-one (you and me), and Social (all of us). Within each of those sections, I include subsections that discuss each of the three subcategories of “AI.”
Section 1 (Self-Preservation instinct):
What I commit to.
Assistive AI
Closed captions & transcription: For video and audio content produced by FFS Media, I’ll often enable auto-subtitles and closed captioning in the program or platform. This is for accessibility purposes for those who are deaf, hard of hearing, or struggle with auditory processing. Programs/platforms used: Riverside FM, YouTube.
Text-to-speech: For posts on Substack, I’ve enabled text-to-speech for those who may be visually impaired to access the content. I also use the speech-to-text dictation feature on Microsoft Word for dictating outlines, notes, or sometimes first drafts. While I’ve used Claude AI in the past to clean up the dictation without otherwise changing my words, I’ve changed my mind about using Anthropic products and am currently searching for another tool that more closely aligns with my values.
Agentic AI
My commitment to cybersecurity and digital privacy has made me opt out of agentic AI entirely. While I do use old-school website integrations that transfer data from one program to another (BookFunnel to MailerLite, say), agentic AI comes into direct conflict with my core value of privacy. Maybe one day I’ll find agentic AI I trust with the deepest level of permissions in each of the tools I grant it access to, but today is not that day.
Generative AI
I have no desire to use generative AI in my business. I explored a few of the tools earlier on, but I encountered too many subtle but alarming cognitive, emotional, and behavioral changes inside of myself to make any increase of “productivity” worth it. Not only that, but I appreciate the strain of creativity, and I like paying humans for their labor.
Some other reasons I don’t engage with generative AI include: alarming environmental impact, cognitive surrender, data collection concerns, labor rights, intellectual property rights, dependency on financially unfeasible technology.
Some programs that I use, like Canva and Notion, offer gen AI capabilities, but I opt out of those whenever I’m aware that’s what I’m being presented.
Section 2 (One-on-One instinct):
What you and I can expect from each other.
Assistive AI
In the rare cases where I may use assistive AI, you will always be asked for consent first. I use Zoom for virtual coaching calls, and it’s unfortunately shifting more heavily toward AI features (I’m currently searching for an alternative, as Zoom seems likely to make its AI features mandatory soon). I reject the AI summary option as a default to preserve the privacy of the client. Clients are supplied with the audio and video recordings of the calls upon request, and if they wish to use their own AI to transcribe the downloaded recordings, they are free to do so.
However, I believe that uploading transcripts or recordings of our calls to an LLM with the intent to train a model on my expertise (a “Clairebot” as it were) is a violation of my trust, and a show of disrespect for my labor, hard-earned expertise, and humanity. Plus, it’ll probably spit out bad advice you should never take.
Agentic AI
Since I don’t trust agentic AI with data from my own company, I have no plans of giving it access to anyone else’s data. I expect the same consideration from people I work with. Out of respect for my right to privacy, I request that collaborators who use agentic AI in their own business refrain from feeding any of my identifying information into those programs.
Generative AI
In commitment to my core values, I require that those I collaborate with (cover designers, graphic designers, copywriters, editors, or cowriters) disclose all generative AI that they are using in the collaboration process so that I can consent (or not) to my business interests interacting with that material. Collaborators who knowingly fail to disclose this will no longer be welcome to work with me, FFS Media, or the Liberated Writer.
Important distinction
Coaching clients who use agentic or gen AI to assist with their writing and marketing process or for cover design are still welcome to be clients. I’m not interested in policing other people’s independent use, since it’s crucial to a healthy society that different people prioritize different values. Coaching calls are (still) a shame-free zone.
Section 3 (Social instinct):
What we all can expect from each other.
Assistive AI
Transcription & closed captioning will be used on a situational basis for recorded group calls. In the event I intend to create a transcript or automated CC, I will notify everyone on the call ahead of recording so that they may opt out, change their name on Zoom, or omit other information they do not wish to have ingested by assistive AI.
It is against this policy to record any group call to your own computer or cloud storage. Others are unable to offer informed consent about the program you are using and how it manages their data. Individuals have a right to know where their words are being uploaded and for what purposes, hence this policy.
Agentic AI
You do not have permission to upload any information from FFS Media or Liberated Writer communities, courses, or classes into programs that use agentic AI.
Before uploading any data (name, contact info, conversations, etc.) from individuals involved in FFS Media or Liberated Writer events or communities into programs that interact with agentic AI, you must secure written consent from each party involved that they are okay with the agentic AI program you’re using. These things can be an absolute nightmare for digital privacy, especially if the one you’re using is free or cheap.
Generative AI
I do not consent to any use of any of my materials (books, podcasts, blogs, classes, etc.) being fed into a generative AI model for any purpose.
You also do not have the consent of individuals within the community to feed conversations from group calls, chats, or the Slack group into generative AI, so just don’t do it, no matter how innocent a reason you have for it. It’s an extractive practice that runs counter to healthy relationships.
Accountability process for violation of policy guidelines
While I won’t be actively policing anyone, if I’m presented with compelling evidence of a violation of these basic practices of privacy and consent, I will first have a conversation with you to gain deeper understanding and avoid acting on rumors or miscommunication. After that conversation, you may be removed from this community and all future FFS Media & Liberated Writer communities.
None of my policies are meant to shame you for your individual use. I believe people of integrity can arrive at different levels of AI use. I believe transparency on these issues is essential, even though it may mean others disagree strongly with me, call me names, or even decide to stop working with me. That’s show biz, folks.
Despite how many people seem to feel on the internet, AI transparency isn’t a plea deal, so this policy isn’t an attempt to protect myself from blowback. AI is still the Wild West in many legal and ethical ways, and I believe we care for our community through transparency so that each person we interact with can make an informed choice about the relationship.
I’ve developed this policy with the wisdom and inspiration of others who are ahead of me in this process. Thank you to Amelia Hruby and Mel Mitchell-Jackson for inspiring me and leading the way into this wilderness.
If you’re thinking about developing an AI policy for your author business (which I highly recommend; it’s been a challenging and thought-provoking process for me), check out their guidance through the links in their names.
I consider this a living document, and it will change as available tools do. Thank you for reading this far!