One of our core principles is simple to state but surprisingly difficult to achieve in practice. We aim to make ecological monitoring as objective as possible. That idea sits at the heart of how we design our workflows, how we collect data, and ultimately why we built the STA logger in the first place.
Ecology, by its nature, has always carried a degree of subjectivity. Field observations are influenced by experience, intuition, environmental conditions, and sometimes even fatigue. That is not a criticism. Many highly skilled ecologists produce excellent work using traditional approaches. But when monitoring becomes the basis for long-term decision making, funding allocation, or regulatory outcomes, subjectivity becomes a limitation. If a method cannot be repeated in a consistent way, it becomes difficult to compare results through time, between practitioners, or across sites.
We saw this challenge clearly during a recent project involving a wetland system with a known population of the threatened Growling Grass Frog (GGF). A previous survey had done a thorough job in confirming presence and identifying a healthy number of individuals. There was no issue with the outcome. The issue was with the pathway taken to get there. The survey involved general searches, meandering through habitat, and opportunistic observations. It worked, but it was not structured in a way that could be repeated with confidence. Details such as the exact path taken, the time spent in specific areas, and the method used to detect individuals were either loosely defined or not recorded at all. In effect, the result depended heavily on the individual conducting the survey.
Our approach to the same problem was deliberately different. Rather than covering ground in a flexible way, we established a series of fixed observation points across the wetland. At each location, we followed a consistent protocol. A timed audio track was used to standardise call playback, ensuring that each survey point was treated the same way, every time. Observations were recorded over a defined period, with clear rules about what constituted a detection.
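The value of a fixed protocol is that it can be written down completely. As a rough sketch only, the per-point rules described above might be captured in a structure like the following; every name and value here is an illustrative assumption, not our actual field configuration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SurveyPoint:
    """One fixed observation point in the wetland (hypothetical structure)."""
    point_id: str
    latitude: float
    longitude: float

@dataclass(frozen=True)
class PointProtocol:
    """The same protocol is applied identically at every point, every visit."""
    playback_track: str   # standardised call-playback audio file (assumed name)
    playback_seconds: int # length of the timed playback
    listen_seconds: int   # defined observation window after playback
    detection_rule: str   # explicit rule for what counts as a detection

# Illustrative values only; the real timings and rule wording would come
# from the survey design, not from this sketch.
PROTOCOL = PointProtocol(
    playback_track="ggf_call_reference.wav",
    playback_seconds=60,
    listen_seconds=300,
    detection_rule="audible advertisement call during the listen window",
)
```

Encoding the protocol as data, rather than leaving it in a surveyor's head, is what makes it possible to treat each point the same way on every visit.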
The key difference was not just in structure, but in how we interpreted the data. Instead of simply counting calls, we recorded the direction of each call relative to the observer. By repeating this process across adjacent points, we were able to triangulate the likely position of individuals. For example, if calls were detected to the right at one point and then split across left and right at the next, we could infer that at least one individual was located between those points. This allowed us to build a spatially explicit estimate of abundance, rather than a simple tally.
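The inference step above is simple enough to sketch in code. Assuming points are ordered along a transect and call directions are recorded relative to the direction of travel (so "right" at one point faces toward the next point, and "left" at the next point faces back), an individual can be placed between two adjacent points when both conditions hold. The function and point names below are hypothetical.

```python
def infer_between(points, detections):
    """Infer segments that must contain at least one calling individual.

    points: point IDs in transect order
    detections: dict mapping point_id -> set of directions heard
                ("left" / "right", relative to the transect heading)
    Returns (a, b) pairs of adjacent points with an inferred individual between them.
    """
    inferred = []
    for a, b in zip(points, points[1:]):
        # A call heard to the right of point a and to the left of point b
        # must originate somewhere between them.
        if "right" in detections.get(a, set()) and "left" in detections.get(b, set()):
            inferred.append((a, b))
    return inferred

# Worked example matching the text: calls to the right at P1,
# then split across left and right at P2.
points = ["P1", "P2", "P3"]
detections = {"P1": {"right"}, "P2": {"left", "right"}}
print(infer_between(points, detections))  # → [('P1', 'P2')]
```

Because each segment requires corroborating detections from two points, the count is conservative by construction, which is exactly the property the method relies on.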
This method does not attempt to capture every individual. In fact, it intentionally underestimates total population by focusing on consistent detection criteria. The outcome is a minimum abundance that can be directly compared over time. That consistency is far more valuable than a higher but less reliable number. During the survey, we also recorded opportunistic visual observations, but these were treated as supplementary rather than core data.
The broader monitoring program combined this structured survey approach with other objective datasets, including high-accuracy GNSS mapping and repeatable remote sensing analysis of habitat change. Each component was designed to reduce ambiguity and improve comparability through time.
This is ultimately what we mean by objective ecological monitoring. It is not about removing the ecologist from the process, but about designing systems where the result does not depend on who happens to be in the field on a given day. It is about creating methods that can be repeated, tested, and trusted.
That shift matters: when decisions rely on data, the quality of those decisions is only as strong as the consistency of the data behind them.
