This essay is based on a conversation with Aaron Sneed, a 40-year-old founder of a defense technology company based in Florida. The following text has been edited for length and clarity.
When I started my business as a solopreneur, I realized I didn’t have the money to pay lawyers, human resources representatives, and a number of other service providers. So, using AI, I created something I call “The Council.”
The Council, made up entirely of AI agents, helps me save about 20 hours per week – and that’s a very conservative estimate. General corporate, human resources, legal, and financial AI agents all have a seat on the Council. In total, I use 15 custom agents, including a Chief of Staff agent, to manage my workload.
I have been using automated tools for at least a decade
I have been working on autonomous platforms that make decisions independently for at least 10 years. That’s why I quickly became excited about commercial large language models and AI tools when they came onto the market.
I primarily use Nvidia’s platform as the base hardware for technical prototypes and experiments. I use their GPUs, and because I purchased their hardware, they give me free access to their AI software. Beyond that, my council is built on OpenAI’s ChatGPT business platform, using custom GPTs and projects.
Overall, my AI council consists of the following members:
- Chief of Staff agent
- HR agent
- Financial agent
- Accounting agent
- Legal, communications, and PR agent
- Security and compliance agent
- Technical agent
- Quality agent
- Supply chain agent
- Training agent
- Manufacturing agent
- Business systems agent
- Facilities agent
- Field service agent
- IT and data agent
Each agent has different responsibilities
My Chief of Staff agent is important because it is the voice that sets priorities based on parameters such as risks, problems, and opportunities.
I have told my Chief of Staff agent which agents take priority when decisions are made. For example, all legal, compliance, or security-related matters are given higher priority, so I direct the Chief of Staff to rank those agents above all the others.
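To make that idea concrete, here is a rough sketch of what such a priority structure could look like in code. The agent names, tier values, and the `rank_findings` helper are placeholders for illustration only, not the actual setup behind the Council.

```python
# Illustrative sketch only: a simple priority ranking a "Chief of Staff"
# orchestrator could use to decide which agents' findings get handled first.
# Agent names and tier numbers are assumptions, not the actual configuration.

AGENT_PRIORITY = {
    "legal": 1,          # legal, compliance, and security always come first
    "compliance": 1,
    "security": 1,
    "finance": 2,
    "hr": 2,
    "supply_chain": 3,
    "manufacturing": 3,
}

def rank_findings(findings):
    """Sort agent findings so lower-tier (higher-priority) items surface first."""
    return sorted(findings, key=lambda f: AGENT_PRIORITY.get(f["agent"], 99))

if __name__ == "__main__":
    findings = [
        {"agent": "manufacturing", "note": "Tooling on line 2 needs recalibration"},
        {"agent": "legal", "note": "Contract clause conflicts with compliance guidance"},
        {"agent": "finance", "note": "Q3 cash-flow projection updated"},
    ]
    for f in rank_findings(findings):
        print(f["agent"], "->", f["note"])
```

The point of a ranking like this is simply that legal, compliance, and security findings surface before everything else, no matter which agent raised them.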
I trained my AI agents to push back and not just say “yes”
I don’t want a group of yes-men. I deliberately trained the agents to disagree with me, because I learned that they naturally want to agree with me. I want them to test my theories to help me achieve my goals.
So I set up a round table with all my AI agents where I can, for example, drop a tender document into the chat and every agent gives its opinion on it at the same time. I use this round table as a safeguard against hallucinations and gaps in knowledge.
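As an illustration of that round-table pattern, the sketch below fans the same document out to several agent personas and collects each one’s opinion. It uses OpenAI’s Python SDK; the system prompts, agent names, and model name are assumptions made for the example, not the actual custom GPTs on the Council.

```python
# Illustrative sketch of a "round table": the same document is sent to several
# agent personas and each returns its own opinion. The system prompts, agent
# list, and model name below are placeholder assumptions, not the real setup.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

AGENTS = {
    "Legal agent": "You are a cautious corporate legal reviewer. Flag risks and missing terms.",
    "Financial agent": "You are a financial analyst. Assess cost, margin, and cash-flow impact.",
    "Quality agent": "You are a quality lead. Check requirements for testability and standards.",
}

def round_table(document: str, model: str = "gpt-4o") -> dict:
    """Ask every agent persona to review the same document and collect the replies."""
    opinions = {}
    for name, system_prompt in AGENTS.items():
        response = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": f"Review this tender document and give your opinion:\n\n{document}"},
            ],
        )
        opinions[name] = response.choices[0].message.content
    return opinions
```

Because every persona answers independently, a claim that only one of them makes stands out, which is what makes a round table like this useful as a check on hallucinations and knowledge gaps.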
The training never really stops, because if I don’t continually train the models, I don’t get the results I want or need. It takes me about two weeks to train my agents to the level of experience they need before I can trust them. In the beginning, it took me longer to get a result than if I had just done the work myself, because I hadn’t focused properly on the training.
By training my AI agents, I have become a better prompter
The models have gotten better, and so have my prompting skills. I now have a better understanding of what information needs to go into an agent, for example a governance structure for priorities. I keep a series of files that implement these requirements to minimize the risk of hallucinations and incorrect or poor information.
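One way to picture those files is as plain-text governance instructions that get prepended to every agent’s system prompt. The directory name, file names, and helper below are assumptions made for illustration, not the actual files.

```python
# Illustrative sketch: governance rules kept in plain-text files and prepended
# to an agent's system prompt. Directory and file names are assumptions.
from pathlib import Path

GOVERNANCE_DIR = Path("governance")  # e.g. priorities.txt, sourcing_rules.txt

def build_system_prompt(agent_instructions: str) -> str:
    """Prepend every governance file to the agent-specific instructions."""
    sections = [p.read_text() for p in sorted(GOVERNANCE_DIR.glob("*.txt"))]
    sections.append(agent_instructions)
    return "\n\n".join(sections)
```

Keeping the rules in shared files rather than retyping them per prompt makes it easier to apply the same governance structure to all fifteen agents.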
Every AI company has its own prompt engineering guide. I recommend taking the time to work through them, because a lot of user errors slow down working with AI.
It takes time for agents to work well. Many companies will try to adopt AI too quickly and too broadly without understanding how to use it properly, and those companies could hurt themselves in the long run.
AI has replaced roles – but not human judgment
I am not sufficiently qualified for many of these tasks and responsibilities, but I am also forced to take them on because I am self-funded.
It was my legal agent, in particular, that taught me where the practical limits of AI tools lie. I have a lawyer, and I use my legal agent to do some preliminary work before giving my lawyer documents for a patent, litigation, or something similar.
When I was training the model to help me build a case from facts and data, I had pulled together a lot of information, and what my legal agent produced sounded good to me as a non-lawyer. Then I presented all of it to my lawyer, and he said it was technically and factually correct, but that we did not want to disclose that information because it would show our hand.
His legal knowledge made me realize that, even though my agent’s output seemed correct and ideal to me, it could not replace a lawyer’s human context, experience, and skill.
Ideally, I would have an HR manager, a legal advisor, etc. – and each would have their own AI agent to assist them. This is how I imagine the future.

