
I tested ChatGPT for writing site reports — here's what happened

A two-week experiment with using ChatGPT to draft daily site reports. The results were better than expected, with some sharp caveats.

Lotfy · 20 April 2026 · 3 min read
[Image: site supervisor reviewing notes on a clipboard]

Daily site reports are one of those tasks that everyone in contracting knows takes too long. They're not hard to write, exactly, but they happen at the end of the day when energy is lowest, and they have to be written carefully because someone — a client, a consultant, a future arbitrator — might read them years later.

For two weeks, I ran an experiment: I drafted my site reports with ChatGPT instead of writing them by hand, then compared the time, the quality, and the surprises. Here's what I found.

The setup

The format I used was simple. At the end of each site visit, I'd dictate a stream of bullet points into my phone — what was done, what was missing, who was on site, what was discussed, any issues. Then I'd paste those bullets into ChatGPT with a prompt that gave it:

  • The project name and stage
  • A description of the report's intended audience
  • An example of the formal tone we use with this client
  • A reminder to flag anything that looked like a contractual issue separately

The output was a draft I would then edit. Edits typically took five to ten minutes. The whole process — bullets to finished report — went from roughly 35 minutes by hand to roughly 12 minutes with AI assistance.
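The four ingredients above can be assembled into a single reusable prompt. Here is a minimal sketch of that assembly step; the function name, field wording, and example values are hypothetical, not my exact template:

```python
# Hypothetical sketch of the prompt assembly described above.
# Wording and field names are illustrative, not a tested template.

def build_report_prompt(project: str, stage: str, audience: str,
                        tone_example: str, bullets: list[str]) -> str:
    """Assemble a site-report drafting prompt from dictated bullet points."""
    bullet_block = "\n".join(f"- {b}" for b in bullets)
    return (
        f"Project: {project} (stage: {stage})\n"
        f"Audience: {audience}\n\n"
        "Match the tone of this example report:\n"
        f"{tone_example}\n\n"
        "Draft a daily site report from the notes below. Flag anything "
        "that looks like a contractual issue in a separate section.\n\n"
        f"Notes:\n{bullet_block}\n"
    )

prompt = build_report_prompt(
    project="Tower B fit-out",            # hypothetical project
    stage="interior finishes",
    audience="client's project manager",
    tone_example="Works proceeded per programme on all active fronts.",
    bullets=["22 workers on site", "tiling complete on level 3"],
)
print(prompt)
```

The point of keeping it as a function is that the tone example and audience line stay fixed per client, so only the bullets change from day to day.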

What it did well

Tone consistency. Once I gave it the example, the model held the formal-but-not-stiff tone that this particular client expects. Better than I'd manage at 6pm on a Thursday, honestly.

Structuring. It reliably grouped progress, issues, manpower, and weather into the same sections in the same order, every time. My handwritten reports drift on this; the AI ones don't.

Catching omissions. Twice in the two weeks, the model asked clarifying questions before producing a draft — once about whether a delay was the contractor's or the client's fault, once about whether a specific subcontractor was on site or not. Both questions caught real ambiguities in my notes that would have produced sloppy reports.

Sentence-level polish. Subjectively, the prose was tighter. Run-on sentences and minor grammar slips that creep into a tired-end-of-day report were absent.

What it did poorly

Numbers. The model occasionally invented or changed figures. It once turned "around 22 workers" into "approximately 25 personnel" in a way that read fine but was just wrong. After that I started double-checking every number against my notes before sending.
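The double-checking I did by eye could also be done mechanically. A crude sketch, assuming both the notes and the draft are plain text, flags any figure in the draft that never appeared in the notes:

```python
import re

def numbers_in(text: str) -> list[str]:
    """Extract numeric tokens (integers and decimals) from text."""
    return re.findall(r"\d+(?:\.\d+)?", text)

def unmatched_numbers(notes: str, draft: str) -> list[str]:
    """Numbers that appear in the draft but not in the original notes."""
    note_nums = set(numbers_in(notes))
    return [n for n in numbers_in(draft) if n not in note_nums]

notes = "around 22 workers on site, 3 deliveries"
draft = "Approximately 25 personnel were on site; 3 deliveries received."
print(unmatched_numbers(notes, draft))  # → ['25']
```

This catches the "22 became 25" class of error, though not a number the model silently dropped; it is a prompt for a human check, not a replacement for one.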

Severity calibration. A real safety incident and a minor housekeeping issue got the same calm tone unless I explicitly flagged one as serious. The first time this happened I caught it; the next site report writer in your team might not. Always tell the model which items are sensitive.

Contractual phrasing. When my notes hinted at a possible variation or claim, the model would sometimes write it up neutrally — which sounds fine but is exactly the wrong move. In contracting, ambiguous language about delays, scope changes, or extra works is what gets you in trouble three years later. I started keeping a separate "contractual notes" section that the AI didn't touch.

Repetition across days. Reports for similar days started to feel templated. By day eight, two consecutive reports had nearly identical opening paragraphs. A small thing, but the kind of small thing a sharp consultant will notice.
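A crude way to catch this templating drift, assuming each report is plain text with a blank line after its opening paragraph, is to compare consecutive openings for near-identical wording:

```python
import difflib

def opening_similarity(report_a: str, report_b: str) -> float:
    """Similarity ratio (0..1) between two reports' opening paragraphs."""
    open_a = report_a.split("\n\n")[0]
    open_b = report_b.split("\n\n")[0]
    return difflib.SequenceMatcher(None, open_a, open_b).ratio()

# Hypothetical day-7 and day-8 reports with identical openings
day7 = "Works proceeded per programme on all active fronts.\n\nTiling..."
day8 = "Works proceeded per programme on all active fronts.\n\nPainting..."

if opening_similarity(day7, day8) > 0.9:
    print("Openings are nearly identical; consider rewording one.")
```

Anything above roughly 0.9 is a hint to reword before sending; the threshold is a guess, not a calibrated value.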

The data question

Before I started, I went through my own checklist: were these reports mine to share with a third party?

Two of the three projects: yes, with caveats — the client's master agreement allowed reasonable processing tools, and I anonymised any subcontractor names that weren't already public.

One of the three: no. The NDA was strict. So I wrote those reports the old way and left them out of the experiment.

Be honest with yourself about which of your documents are actually yours to paste.

The verdict

For drafting daily site reports, ChatGPT (and equivalents) saves real, meaningful time: roughly two-thirds by my measurement, from about 35 minutes down to about 12. The quality of the output is at or above what a tired person would produce by hand.

It's not a "press a button and trust it" workflow. The numbers need checking. The contractual sensitivity needs human judgement. The data permission needs a clear answer up front.

But used as a fast first-drafting partner, with you as the editor and final author, it's worth the setup time. I've kept it in my routine after the experiment ended.

The next experiment, coming soon: using AI to read tender documents. That one is going to be more interesting.

Lotfy

Engineer · Contracting · Riyadh, KSA
