
Field notes

What a 400-citizen parallel count taught us

Lessons from the Watch the Votes pilot — EC8A photographs, median consensus, discrepancy handling, and what citizens actually want from an election-day app.

In January we ran a pilot of Watch the Votes with 400 citizen observers across seven wards. The full case study is elsewhere on this site. These are the lessons that did not fit into the case study.

Citizens want to upload the sheet, not fill in a form. The number-one reason an upload was abandoned in testing was that the app asked for too many fields up front. We restructured the flow so the default path is: photograph the EC8A sheet, let the AI extract the tallies, confirm. Everything else is optional. Uploads went up by a factor of three.
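One way to picture that restructuring is a draft type where only the photo-extract-confirm path gates submission. This is a minimal sketch, not the app's real schema; every field name here is illustrative.

```typescript
// Hypothetical shape of an upload draft. Only the photograph, the
// AI-extracted tallies, and the observer's confirmation are required;
// everything else is optional and never blocks submission.
type UploadDraft = {
  photo: Blob | string;                       // EC8A sheet photograph
  extractedTallies?: Record<string, number>;  // filled in by the AI step
  confirmed?: boolean;                        // observer taps "confirm"
  // optional metadata, collected only if the observer volunteers it
  observerNotes?: string;
  gps?: { lat: number; lon: number };
};

// A draft is submittable as soon as the default path is complete.
function isSubmittable(d: UploadDraft): boolean {
  return d.extractedTallies !== undefined && d.confirmed === true;
}
```

The point of the sketch is that `isSubmittable` never inspects the optional fields, so a half-filled form can still go out the door.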

Offline matters more than we expected. Polling units in rural wards lose connectivity for hours. The PWA queues uploads in IndexedDB and syncs when a signal returns. We watched one observer photograph fourteen polling units across a morning with no connectivity, then sync the batch at the back of a convenience store that had Wi-Fi. The app never told him anything was wrong.

Trust scoring is a motivator, but not in the way we expected. We built a reputation tier, a visible score that rises as a user's uploads converge with consensus and official results. What we did not expect: once a user reached tier two, they went looking for polling units nobody else was covering. The system turned "I photographed my own ward" into "I am curious what is happening in the next one".
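A convergence-based score could look something like the sketch below. The tolerance, the point-per-match rule, and the tier thresholds are all invented for illustration; the post does not describe the actual formula.

```typescript
// Hypothetical scoring sketch: one point for each tally that lands
// within a percentage tolerance of the consensus figure.
function convergenceScore(
  uploads: number[],    // this user's tallies for a polling unit
  consensus: number[],  // consensus tallies, in the same order
  tolerancePct = 2
): number {
  let score = 0;
  for (let i = 0; i < uploads.length; i++) {
    const c = consensus[i];
    const diffPct =
      c === 0 ? (uploads[i] === 0 ? 0 : 100)
              : (Math.abs(uploads[i] - c) / c) * 100;
    if (diffPct <= tolerancePct) score++;
  }
  return score;
}

// Illustrative tier thresholds; the real cut-offs are not public.
function tier(score: number): number {
  if (score >= 20) return 3;
  if (score >= 8) return 2;
  if (score >= 2) return 1;
  return 0;
}
```

Making the tier visible, rather than the raw score, is what gives users a concrete next rung to climb.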

Discrepancy handling is the most delicate part. In the one ward where the parallel count diverged from the official result, the app showed a clear side-by-side comparison, a link to the source EC8A sheets, and a route to escalate. What it did not do was editorialise. "Here is what your neighbours uploaded. Here is what the commission published. The difference is 6.4%. Here is who to tell." That was enough.
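The non-editorialising presentation reduces to a plain data object: the two totals, the relative difference, and who to tell. This is a sketch of the idea, not the app's schema; the field names and the rounding choice are assumptions.

```typescript
// Hypothetical comparison record: facts only, no interpretation.
type Comparison = {
  parallelTotal: number;  // sum of citizen-uploaded tallies
  officialTotal: number;  // commission-published total
  differencePct: number;  // relative to the official figure
  escalateTo: string;     // the escalation route
};

function compare(
  parallel: number,
  official: number,
  escalateTo: string
): Comparison {
  const raw = (Math.abs(parallel - official) / official) * 100;
  return {
    parallelTotal: parallel,
    officialTotal: official,
    differencePct: Math.round(raw * 10) / 10, // one decimal place
    escalateTo,
  };
}
```

Everything the ward screen showed (the side-by-side totals, the 6.4%, the escalation route) is derivable from a record like this plus links to the source EC8A sheets.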