FAQ Report Card


Report Card questions

What are the report cards based upon?

The Healthy Rivers to Reef Partnership’s Technical Working Group (TWG) selected priority aquatic ecosystem indicators that are suitable for measuring waterway health. Each indicator has a set of benchmarks representing a standardised scale from very good to very poor. Monitoring data are compared to the benchmarks using formulas, and the resulting scores are weighted and averaged to produce a reporting zone score.

The Technical Working Group has worked to ensure the report card indicators and scoring methods are based on the best available science, are locally relevant, reflect changes to waterway health, and are consistent with other report card programs across the State where applicable.

What period of time do the report cards cover?

Annual reporting covers a full year stretching from July of one year to June of the next. This timeline was selected because it takes into account the dry and wet season cycle, ensuring that each wet season is included in a single reporting period.

For the pilot report card, all environmental indicator data was from the 2013/14 year, except for estuaries, where 2014/15 data was reported because the estuary monitoring program was in its first year.

For the 2015 report card:

  • All water quality, coral, seagrass and agricultural stewardship data is from the 2014/15 year;
  • The estuary data presented is a repeat of the pilot report card, which brings all water quality data into the same reporting year;
  • Freshwater and estuarine riparian extent, wetland extent, saltmarsh and mangrove extent, along with impoundment length indicators are only reported every four years, so scores are repeated from the pilot report card;
  • New fish barriers, fish condition and cultural heritage reporting is based on 2015/16 data; and
  • Non-agricultural stewardship data is from the 2015/16 year.

What is the difference between an indicator, index and indicator category?

An indicator is the measured feature (e.g. particulate nitrogen); an indicator category is generated by combining one or more indicators (e.g. nutrients, made up of particulate nitrogen and particulate phosphorus); an index is generated from indicator categories (e.g. water quality, made up of nutrients, water clarity, chlorophyll-a and contaminants); and the overall score is generated from an index or an aggregation of indices (e.g. water quality, coral and seagrass). The coaster below depicts how each element is aggregated to generate a final score.
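
As an illustration of this aggregation, the sketch below rolls hypothetical indicator scores up through the coaster’s levels using simple unweighted averages; the actual weightings and scoring formulas are described in the technical reports.

```python
# A minimal sketch of the coaster aggregation, assuming 0-100 scores and
# simple unweighted averaging; all values here are hypothetical.

def average(scores):
    """Average a collection of 0-100 component scores."""
    return sum(scores) / len(scores)

# Indicators: measured features scored against benchmarks.
particulate_nitrogen = 62.0
particulate_phosphorus = 55.0
water_clarity = 71.0
chlorophyll_a = 48.0

# Indicator category: one or more indicators combined (e.g. nutrients).
nutrients = average([particulate_nitrogen, particulate_phosphorus])

# Index: one or more indicator categories combined (e.g. water quality).
water_quality = average([nutrients, water_clarity, chlorophyll_a])

# Overall score: an index or an aggregation of indices
# (coral and seagrass indices would be added here when available).
overall = average([water_quality])

print(f"nutrients={nutrients:.1f}, water quality={water_quality:.1f}, overall={overall:.1f}")
```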

What is data confidence and how is it measured?

Every time an observation is made or a score is calculated, there is potential for error. Data confidence helps to describe how confident managers and experts are in the methods of data collection and analysis that are used to produce an indicator score reported in the report card.

The framework developed for the Reef Water Quality Protection Plan report card was utilised to provide confidence scores for the Mackay-Whitsunday report card results. Confidence for each indicator is assessed separately for each reporting zone by considering five criteria:

  1. Maturity of methodology: confidence that the method/s being used are tested and accepted broadly by the scientific community;
  2. Validation: this criterion looks at the proximity of the indicator being measured to that which is reported (e.g. remote sensing or ground surveys of an indicator);
  3. Representativeness: confidence in the representativeness of the monitoring data; this criterion considers the spatial and temporal resolution of the data as well as the sample size;
  4. Directness: this criterion is similar to “validation” but instead of looking at the proximity of the indicator, the criterion looks at the confidence in the relationship between the monitoring and the indicator being reported against;
  5. Measured error: this criterion incorporates uncertainty into the metric and uses any quantitative data where it exists.

This approach to confidence scoring enables the use of both expert opinion and measured data. The overall confidence score is displayed out of five; five out of five ‘dots’ indicates the highest confidence in the data collected for an indicator. When confidence in an indicator is not high, the results for that indicator should be viewed with caution.
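
The sketch below shows one way the five criterion scores could combine into a dots display. It assumes each criterion is scored 1–5 and that the overall score is a rounded average, which is an illustrative simplification of the framework.

```python
# A minimal sketch, assuming each of the five criteria is scored on a 1-5
# scale and the overall confidence is their rounded average shown as dots.
# The scores below are hypothetical.

CRITERIA = ["maturity", "validation", "representativeness",
            "directness", "measured_error"]

def confidence_dots(scores):
    """Combine per-criterion scores (1-5 each) into dots out of five."""
    mean = sum(scores[c] for c in CRITERIA) / len(CRITERIA)
    dots = round(mean)
    return "●" * dots + "○" * (5 - dots)

example = {"maturity": 4, "validation": 3, "representativeness": 2,
           "directness": 4, "measured_error": 2}
print(confidence_dots(example))  # ●●●○○ -> moderate confidence; interpret with caution
```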

Any differences in confidence scores for an indicator between reporting zones can be found in the technical reports.

What is measured to produce the score for the contaminants indicator category?

The contaminant indicator category includes measurements of five herbicides (ametryn, atrazine, diuron, hexazinone and tebuthiuron) that inhibit plant photosynthesis (food production). These photosynthetic inhibitors can reduce the productivity of aquatic plants and corals; therefore, it is important to measure them in the receiving environment. There are two methods used to measure these herbicides:

  • The “PSII herbicide equivalent concentration” (PSII-HEq) method, used in the marine environment;
  • The “multisubstance-Potentially Affected Fraction” (ms-PAF) method, used in the freshwater/estuarine environment.

What is the difference between the PSII-HEq and ms-PAF methods for measuring contaminants in the marine and freshwater/estuarine environments?

The PSII herbicide equivalent concentration (PSII-HEq) method measures the toxicity of a herbicide mixture (of the five PSII herbicides only) and assumes that the herbicides act together in the receiving environment (i.e. cumulatively), resulting in more environmental harm. Each herbicide is first compared to a reference PSII herbicide: diuron. As the reference chemical, diuron is given a value of 1. A pesticide more potent than diuron is given a value greater than 1, and a pesticide less potent than diuron is given a value less than 1. Toxicity is then calculated for a whole water sample by multiplying the value assigned to each herbicide by its respective concentration. These values are added together to produce the final toxicity score. This method is widely used and simple to calculate; however, it is limited in that it can only account for chemicals that impact aquatic organisms in the same manner (e.g. chemicals that impact the photosynthetic process at the same point).
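
The calculation can be sketched as follows. The relative potency factors shown are hypothetical placeholders (only diuron’s value of 1 is fixed by definition); real factors come from published toxicity data.

```python
# A minimal sketch of the PSII-HEq calculation described above. Potency
# factors other than diuron's are hypothetical.

POTENCY_RELATIVE_TO_DIURON = {
    "diuron": 1.0,       # reference chemical, fixed at 1
    "ametryn": 1.2,      # hypothetical: slightly more potent than diuron
    "atrazine": 0.2,     # hypothetical: less potent than diuron
    "hexazinone": 0.3,   # hypothetical
    "tebuthiuron": 0.1,  # hypothetical
}

def psii_heq(concentrations_ug_per_l):
    """Sum of concentration x relative potency for each PSII herbicide,
    expressed as a diuron-equivalent concentration (ug/L)."""
    return sum(conc * POTENCY_RELATIVE_TO_DIURON[herbicide]
               for herbicide, conc in concentrations_ug_per_l.items())

sample = {"diuron": 0.05, "atrazine": 0.20, "hexazinone": 0.10}  # hypothetical sample
print(f"PSII-HEq = {psii_heq(sample):.2f} ug/L diuron-equivalent")  # 0.12
```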

The newer multisubstance-Potentially Affected Fraction (ms-PAF) method has been developed so that it can measure the impact of chemicals that affect organisms in the same or in different manners (e.g. it can account for chemicals that impact cell division as well as chemicals that impact photosynthesis in aquatic organisms).

The ms-PAF method estimates the ecological risk of a chemical to an aquatic ecosystem by determining the percentage of species that would potentially be affected by a given concentration of that chemical. Currently, in freshwater and estuarine systems, only the five PSII herbicides listed above are measured using the ms-PAF method. For these herbicides, only species that photosynthesise (such as micro-algae and seagrasses) are used to assess the potential impact, as these are the organisms most sensitive to PSII herbicides.
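
The core idea can be sketched as below: for a given concentration, a species sensitivity distribution (SSD) gives the fraction of species potentially affected. The log-normal SSD and its parameters here are hypothetical; in practice the distributions are fitted to ecotoxicity data and combined across the chemicals in the mixture.

```python
# A minimal sketch of the ms-PAF idea for a single chemical: the fraction of
# species potentially affected at a given concentration, read from a
# log-normal species sensitivity distribution (SSD). SSD parameters are
# hypothetical; combining chemicals in a mixture is not shown.

import math

def potentially_affected_fraction(conc_ug_per_l, ssd_log10_mean, ssd_log10_sd):
    """Fraction of species whose sensitivity threshold lies below the given
    concentration, under an SSD that is normal in log10 space."""
    z = (math.log10(conc_ug_per_l) - ssd_log10_mean) / ssd_log10_sd
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF

# Hypothetical SSD: median species sensitivity of 1 ug/L (log10 = 0)
paf = potentially_affected_fraction(0.2, ssd_log10_mean=0.0, ssd_log10_sd=0.7)
print(f"{paf:.1%} of species potentially affected")  # ~15.9%
```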

At this time, the ms-PAF method is only used to report contaminants in freshwater and estuarine environments because this newer method has not yet been finalised for the marine environment.

Why is there a difference between the freshwater and estuary scoring and marine scoring?

The Technical Working Group (TWG) has worked to ensure that the approach to scoring the condition of indicators in the freshwater, estuarine and marine zones is based on the best available science, is able to detect change and, where possible, is consistent with existing programs. For the marine zones, the TWG agreed to adopt the same overarching scoring framework that is currently used in the Great Barrier Reef report card. This framework is relevant to the marine zones in the Mackay-Whitsunday region, and adopting this approach makes the marine results comparable across both reporting products.
For the Freshwater and Estuary reporting zones, the TWG has developed tailored indicators and a scoring system with thresholds better suited to local conditions. For more information please refer to the program design and technical reports.

Why is there a delay between the data collection and the release of the report cards?
In preparation for a report card, data needs to be collated, validated and comprehensively analysed before it is ready to be released in the report card. This process takes six to nine months. Time is also needed for the Technical Working Group and Independent Science Panel to review the processes for collecting and assessing the data, as well as the results themselves.

What is stewardship and how is it reported?
The report card includes assessments of the level of ‘stewardship’ or ‘management’ undertaken by the different waterway-related industries that operate in the region.

Stewardship in the agricultural sector is assessed using the Paddock to Reef reporting frameworks for each agricultural industry in the region. These frameworks describe best management practice (BMP), which is defined as low risk and low-moderate risk management. The percentage of land under best management practice in each agricultural industry is reported in the report card.

Stewardship in non-agricultural industries that operate in the region is reported based on a combination of self-assessable surveys and compliance data. This information is compared to regionally tailored criteria for planning, implementation and outcomes based on specific reporting frameworks for each industry. The assessment is reported on a scale of ‘very effective management practice’ to ‘ineffective management practice’.

For more detail see the technical reports.

Why should some results be viewed with caution?

Where confidence scores are not high, results should be viewed with caution. An example of this is the water quality indicators in the Whitsundays inshore marine zone. While the techniques used to determine the condition of water quality (grab samples for water clarity, chl-a and nutrients indicator categories) are broadly accepted by the scientific community and are directly linked to the reported indicator, caution is warranted due to low confidence in how representative the sample is.

For the Whitsundays zone, this means that few samples over space and time were used to provide a score for the whole reporting zone. Specifically, the data was derived from grab samples taken at four sites at just two points in time over the 2014/15 period. This low number of samples through time means the samples may not be representative of conditions throughout the rest of the year. In comparison, confidence is higher in the Central inshore marine zone because data comes from 12 sites sampled at three points in time over the 2014/15 period. Thus, the higher number of samples (through space and time) increases confidence that the samples represent the rest of the reporting zone and time period.

Adding to this, water quality data for the marine inshore zones comes from two programs: the Marine Monitoring Program in the Whitsundays inshore marine zone and the Port of Mackay and Hay Point ambient marine water quality monitoring program in the Central inshore marine zone. There are many challenges in combining data from different programs; different program aims mean methodologies for data collection do not always match. As a result, the data set used to report on water quality must be constrained to the data that is directly comparable between programs. Thus, only grab sample data could be used to assess water quality as this was the only consistent data between programs. A review will be undertaken in the 2016/17 period to examine how to use more of the available data, which would improve confidence in the results.

For water quality in the freshwater river basins there is also low confidence in representativeness; however, this is because the overall score for water quality is derived from only one site per basin. Even though samples are taken monthly from these sites, caution should be used when interpreting results as the sampled site might not represent the rest of the waterways in the basin (for example, water quality in the upper sections of a waterway may be in better condition than in the lower section where a sample site is located, and vice versa).

Why is there more than one dataset making up the inshore seagrass scores for the 2015 report card?

There are currently two different seagrass monitoring programs in the Mackay-Whitsunday region: the Marine Monitoring Program (MMP) and the Queensland Ports Seagrass Monitoring Program (QPSMP). The two programs have different aims, which means they do not measure the same seagrass attributes, nor do they use the same methodologies.

The QPSMP measures seagrass composition, area and biomass indicators, while the MMP reports abundance, reproduction and nutrient status indicators. The seagrass scores in the 2015 report card are based on an interim approach to reporting seagrass condition that uses the indicators from both programs. Indicator scores are averaged at the site level, and the average of the site scores then provides an overall seagrass score.

Work is underway to completely integrate the data from the two programs so that the same indicators are used to report seagrass across the reporting zones in subsequent report cards.

Why are there so many grey areas in the 2014 and 2015 report cards?

Grey areas indicate where there is a data gap. There are a number of reasons for data gaps in the report card. Importantly, data used for the report card must be collected and analysed in a scientifically robust manner. In some cases, data may be available for a particular indicator, but a significant body of work may still be required to ensure that it is reported in a suitable format for the report card (e.g. flow indicators across the freshwater and estuarine zones). In other circumstances, there might be multiple programs collecting data, and work needs to be done to ensure data between programs is consistent and comparable (e.g. data gaps for coral in the Central inshore zone). However, in a number of places throughout the Mackay-Whitsunday region there are no monitoring programs in place and the condition of the indicators is completely unknown (e.g. in the Southern inshore marine zone).

The Partnership has identified priorities for filling these data gaps; see the Future Directions Statement.

What is measured in the Cultural Heritage assessments?

Cultural Heritage site assessments were undertaken in the St Helens zone, Cape Hillsborough zone, and Whitsunday, Hook and South Molle Islands zone. In total, 21 sites were assessed across the three zones, including shell middens, rock shelters, fish traps, quarries and paintings. Zones were scored against spiritual/social values, scientific values, physical condition, protection of sites and cultural maintenance indicators. For more information please visit the Cultural Heritage Assessments page.

Why did the St Helens zone score very poorly in the Cultural Heritage Assessments?

St Helens scored very poorly in the Cultural Heritage Assessments, reflecting low scores across all indicators:

  • The spiritual and social value of the zone is considered to be very low primarily because sites in this zone are not visited by Aboriginal people, are not known about in the Aboriginal community and are not talked about or sung about. A significant amount of ethnographic knowledge for this area has been lost and there is a limited connection to the cultural landscape around St Helens Beach;
  • The scientific value of the zone is considered very low as only two sites were identified here during the 2016 fieldwork, one of which is considered to have limited archaeological potential. There is a lack of diversity amongst the sites and there is no known potential for excavation within this zone;
  • The physical condition of this zone is considered to be very low due to the extensive disturbance visible within the sites of the zone. Sites have been impacted by uncontrolled access and the establishment of recreational grounds and modern facilities. Environmental factors such as erosion and cyclones have also affected the zone. These factors are worsened by the lack of fencing and signage at the sites;
  • The level of protection of the sites within this zone is considered to be very low; of the three sites in this zone registered with the Department of Aboriginal and Torres Strait Islander Partnerships (DATSIP), the co-ordinates were unreliable. The threats to the cultural health of this zone have not been managed and there are no protective mechanisms in place for the sites;
  • Given the lack of cultural heritage management at both a site and zone level, the cultural maintenance rating for this zone is considered to be very low. There are no physical or digital interpretative elements for the protection of sites within the zone, and only two sites were identified and/or researched for input into the Indigenous Cultural Heritage Database.

How is fish community health measured?

Two separate indicators of fish community health are measured to provide a condition score:

  • Native species richness (number of native fish species); and
  • Abundance of pest fish (proportion of the sample that is pest fish).

Fish are sampled by electrofishing. To derive a fish score, samples from the 2015/16 year were compared to what is ‘expected’ in a stream of similar landscape and land use attributes. ‘Expected’ fish are modelled from fish sampling data provided by Catchment Solutions, Reef Catchments, and the Department of Science, Information Technology and Innovation.
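
The comparison could work along the lines of the sketch below, which scores each indicator as an observed-versus-expected ratio and averages the two. The 0–100 scaling and equal weighting are hypothetical simplifications; the actual method is described in the technical reports.

```python
# A minimal sketch, assuming each fish indicator is scored against
# expectations and the two are averaged; scaling and weighting are
# hypothetical.

def fish_condition_score(observed_native, expected_native,
                         pest_count, total_count):
    """Combine native species richness (observed/expected, capped at 1)
    with pest fish abundance (1 - pest proportion), each scaled to 0-100."""
    richness = min(observed_native / expected_native, 1.0) * 100
    pest = (1.0 - pest_count / total_count) * 100
    return (richness + pest) / 2

# e.g. 6 of 8 'expected' native species observed; 5 pest fish in a sample of 50
print(fish_condition_score(6, 8, 5, 50))  # 82.5
```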

More work will be undertaken in subsequent years to improve the model of ‘expected’ fish so that it incorporates a broader range of sampling data.

Currently, the types of species (composition) that are observed during the sampling are not considered when determining a condition score. More work will be undertaken in the future to examine how fish community health can incorporate species composition in the overall score.

Are there rules for minimum data?

At the indicator level, the amount of data (sample size) needed to obtain an indicator score is considered on a case by case basis by data providers and the experts in the Technical Working Group (TWG). If the sample size is considered inadequate the indicator will not be scored.

To aggregate indicators into indicator category and index scores, decision rules were developed for the minimum proportion of information required:

  • ≥ 50% of measured indicators to generate the indicator category score (where relevant); and
  • ≥ 60% of indicator categories to generate an index score.*

Overall scores for reporting zones are presented in the report card even if not all indices are available; the coaster visually shows which index or indices contributed to the overall grade.

*There is one exception to this decision rule. Due to the interim approach for reporting the seagrass index, which incorporates two separate programs (each reporting three of their own specific indicators), there needed to be a separate decision rule for generating seagrass index scores. The minimum information required to produce a seagrass score is ≥ 60% of indicator categories specific to at least one of the monitoring programs. This allows a seagrass index score to be generated when only one program is monitoring seagrass in a marine inshore zone.
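
The sketch below illustrates how these decision rules gate aggregation; the component names and scores are hypothetical, and the seagrass exception is not shown.

```python
# A minimal sketch of the minimum-data rules: an aggregate score is only
# produced when enough components were scored. Thresholds follow the text;
# None marks an unscored component.

def aggregate_if_sufficient(components, min_fraction):
    """Average the scored components only if the fraction scored meets the
    minimum; otherwise return None (aggregate not scored)."""
    scored = [s for s in components if s is not None]
    if len(scored) / len(components) < min_fraction:
        return None
    return sum(scored) / len(scored)

# Indicator category: needs >= 50% of its measured indicators
nutrients = aggregate_if_sufficient([62.0, None], min_fraction=0.5)  # 62.0
# Index: needs >= 60% of its indicator categories (only 2 of 4 scored here)
water_quality = aggregate_if_sufficient([nutrients, 48.0, None, None],
                                        min_fraction=0.6)            # None
print(nutrients, water_quality)
```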

How are overall scores aggregated?

Rules for aggregating indicators into scores in freshwater and estuarine zones are different to rules for aggregating indicators into scores in the marine zones.

Freshwater and estuarine zones:

  • Aggregated scores are only graded as ‘Very Good’ when all contributing indicators and indicator categories are ‘Very Good’;
  • Aggregated scores are graded as ‘Good’ if all contributing indicators and/or indicator categories are either ‘Very Good’ or ‘Good’; and
  • When contributing indicators include at least one ‘Moderate’, ‘Poor’ or ‘Very Poor’ score, all scores are averaged across contributing indicators to produce the aggregated score.
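
The sketch below expresses these three rules. The numeric values assigned to each grade for averaging, and the bands used to convert an average back to a grade, are hypothetical.

```python
# A minimal sketch of the freshwater/estuarine grading rules. Grade labels
# follow the text; numeric grade values and band boundaries are hypothetical.

GRADE_VALUE = {"Very Poor": 10, "Poor": 30, "Moderate": 50,
               "Good": 70, "Very Good": 90}

def freshwater_estuarine_grade(grades):
    """All 'Very Good' -> 'Very Good'; all 'Very Good'/'Good' -> 'Good';
    otherwise average the numeric values and map back to a grade band."""
    if all(g == "Very Good" for g in grades):
        return "Very Good"
    if all(g in ("Very Good", "Good") for g in grades):
        return "Good"
    mean = sum(GRADE_VALUE[g] for g in grades) / len(grades)
    for grade, upper in [("Very Poor", 20), ("Poor", 40),
                         ("Moderate", 60), ("Good", 80)]:
        if mean <= upper:
            return grade
    return "Very Good"

print(freshwater_estuarine_grade(["Good", "Very Good"]))         # Good
print(freshwater_estuarine_grade(["Good", "Poor", "Moderate"]))  # Moderate
```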

Marine zones, water quality and coral:

  • All scores are averaged across contributing indicators to produce the aggregated score.

Marine zones, seagrass interim approach only:

Because there are two separate monitoring programs (Marine Monitoring Program [MMP] and Queensland Ports Seagrass Monitoring Program [QPSMP]), each reporting its own three seagrass indicator categories, a separate approach was needed to aggregate them into a seagrass index score.

  • When MMP data is used, a site is provided a seagrass score by averaging across measured indicator categories;
  • When QPSMP data is used, a site is provided a seagrass score by allocating the minimum score from measured indicator categories; and
  • A seagrass index score is aggregated across sites, not across indicator categories. For this reason, sometimes the indicator category scores can appear to diverge from the overall seagrass score. An example of this is in the Northern inshore zone where the QPSMP indicator categories all scored ‘Moderate’ but the overall seagrass index for the zone was ‘Poor’.
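
These three rules can be sketched as follows; the site names, indicator category scores and 0–100 scale are hypothetical. Note how aggregating across sites, rather than across indicator categories, can make the index diverge from the category scores.

```python
# A minimal sketch of the interim seagrass aggregation: MMP sites average
# their indicator categories, QPSMP sites take the minimum, and the index
# averages across sites. All site names and scores are hypothetical.

def seagrass_site_score(program, category_scores):
    if program == "MMP":
        return sum(category_scores) / len(category_scores)
    if program == "QPSMP":
        return min(category_scores)
    raise ValueError(f"unknown program: {program}")

sites = [
    ("MMP",   [55.0, 60.0, 50.0]),  # abundance, reproduction, nutrient status
    ("QPSMP", [50.0, 45.0, 58.0]),  # composition, area, biomass
]
site_scores = [seagrass_site_score(p, scores) for p, scores in sites]
seagrass_index = sum(site_scores) / len(site_scores)
print(site_scores, f"index = {seagrass_index:.1f}")  # [55.0, 45.0] index = 50.0
```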