What are the differences between the Great Barrier Reef and Mackay-Whitsunday report cards?
What Paddock to Reef data sets are used in the Mackay-Whitsunday report card?
The Mackay-Whitsunday report card draws on data from the Paddock to Reef Program to report on seagrass, coral and water quality in our inshore marine zones, water quality in our catchments, and management practice for sugarcane, grazing and horticulture.
How does the geographical area of the Mackay-Whitsunday report card differ from the Mackay Whitsunday region in the GBR report card?
The geographical area of the Mackay-Whitsunday report card includes the Don, Proserpine, O’Connell, Pioneer, and Plane river basins. The Mackay-Whitsunday report card also reports on offshore and inshore waters separately and divides the inshore area into four separate zones: the Northern, Whitsunday, Central and Southern zones. In comparison, the GBR report card does not include the Don basin or offshore waters when referring to the Mackay Whitsunday region, and reports on the inshore zone as a whole.
Why might there be differences in scores for seagrass, coral and water quality in the inshore marine zone for the Mackay-Whitsunday report card compared to scores in the GBR report card, even though the same data is used?
The Mackay-Whitsunday report card draws on data from the Abbot Point and Mackay and Hay Point ambient monitoring programs, commissioned by North Queensland Bulk Ports, for seagrass, coral, and water quality. This data is combined with Paddock to Reef seagrass, coral, and water quality data to calculate condition scores for the four inshore zones. This additional data, along with reporting the data in four separate inshore marine zones, can mean that scores for seagrass, coral, and water quality in the Mackay-Whitsunday report card do not always exactly match the scores in the Mackay Whitsunday region in the GBR report card.
What is the difference in reporting pollutants (DIN, Sediment and Pesticides) in catchments, between the Mackay-Whitsunday report card and the GBR report card?
The Mackay-Whitsunday report card reports pollutants including Dissolved Inorganic Nitrogen (DIN), sediment, and pesticides as one index, ‘water quality’, which is based on the annual concentration of these pollutants in the waterways. This is different to the GBR report card, which reports these same pollutants as modelled estimates of the annual average reduction in human-caused DIN, sediment and pesticide loads at the end of catchments. Therefore, even though the same pollutants are being reported in catchments, the two report cards are not reporting them in the same way, so the scores cannot be expected to match and cannot be directly compared.
Why might there be differences in management practice scores for sugarcane, grazing and horticulture?
Management practice data is reported at the region level, but because the Mackay-Whitsunday report card includes the Don basin (which the GBR report card does not when reporting the Mackay Whitsunday region), management practice scores in the two report cards may differ. This is particularly relevant to grazing and horticulture management practice reporting. Because there is minimal horticulture in the Proserpine, O’Connell, Pioneer, and Plane basins, horticulture is not reported for the Mackay Whitsunday region in the GBR report card. However, horticulture is a key land use in the Don basin, which means it is included in the Mackay-Whitsunday report card.
Mackay-Whitsunday report card questions
What are the report cards based upon?
Priority aquatic ecosystem indicators that are suitable for measuring waterway health were selected based on the relevant values and pressures in the Region. Each indicator has a relevant benchmark that signals if it is in very good or very poor condition. Regional data for each indicator is compared to benchmarks using indicator-specific methodology to produce a score. Scores correspond to one of five condition grades: Very Good (A), Good (B), Moderate (C), Poor (D), Very Poor (E). Scores for each indicator are aggregated (rolled-up) into categories and indices, and these scores are used to produce an overall score for an individual reporting zone in the Region.
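The score-to-grade step described above can be sketched in a few lines of Python. The cut-off values below are assumptions for illustration only; the actual banding used by the program is defined in its technical reports.

```python
def grade(score):
    """Map a 0-100 condition score to one of the five grades.
    Cut-off values are hypothetical, for illustration only."""
    if score >= 85:
        return "A (Very Good)"
    if score >= 65:
        return "B (Good)"
    if score >= 50:
        return "C (Moderate)"
    if score >= 25:
        return "D (Poor)"
    return "E (Very Poor)"

print(grade(72))  # B (Good)
```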
The Technical Working Group (TWG) has worked to ensure the report card indicators and scoring methods are based on the best available science, are locally relevant, reflect changes to waterway health, and are consistent with other report card programs across Queensland where applicable.
What is the difference between an indicator, index and indicator category?
An indicator is the measured feature in the ecosystem (e.g. particulate nitrogen); an indicator category is generated by combining one or more related indicators (e.g. the category ‘nutrients’ is made up of particulate nitrogen and particulate phosphorus); an index is generated by combining related categories (e.g. the index ‘water quality’ is made up of nutrients, water clarity, chlorophyll-a and pesticides); and the overall score is generated from one or more indices (e.g. the water quality, coral, seagrass and fish indices can make up an inshore marine zone score).
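The indicator-to-category-to-index-to-overall roll-up can be sketched as follows. The individual scores, the 0-100 scale and the use of simple averaging are illustrative assumptions, since the report card's aggregation methods are indicator-specific.

```python
# Illustrative roll-up of indicators -> category -> index -> zone score.
# All numbers are hypothetical; averaging is assumed for simplicity.

def average(scores):
    """Average a collection of 0-100 scores."""
    return sum(scores) / len(scores)

indicators = {
    "particulate nitrogen": 55,
    "particulate phosphorus": 65,
}
nutrients_category = average(indicators.values())   # category score

categories = {
    "nutrients": nutrients_category,
    "water clarity": 70,
    "chlorophyll-a": 50,
    "pesticides": 80,
}
water_quality_index = average(categories.values())  # index score

indices = {"water quality": water_quality_index, "coral": 40,
           "seagrass": 45, "fish": 55}
zone_score = average(indices.values())              # overall zone score
print(zone_score)
```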
How has freshwater and estuary scoring improved in the 2016 report card?
The TWG has worked to ensure that the approach to calculating environmental grades is aligned across reporting zones. Scores in the freshwater and estuarine zones are now standardised to fit into the same overarching scoring framework that is currently used in the marine zones, which was adopted from the Great Barrier Reef report card. For more information please refer to the technical reports.
Are there rules for minimum data?
At the indicator level, the amount of data (sample size) needed to obtain an indicator score is considered on a case-by-case basis by data providers and the experts in the TWG. If the sample size is considered inadequate the indicator will not be scored.
To aggregate indicators into category and index scores, decision rules were developed for the minimum proportion of information required:
- ≥ 50% of measured indicators to generate the indicator category score (where relevant)
- ≥ 60% of indicator categories to generate an index score *
Overall scores for reporting zones are presented in the report card, even if not all indices are available.
*Due to the interim approach for reporting the seagrass index, which incorporates two separate programs (each reporting three of their own specific indicators), there needed to be a separate decision rule for generating seagrass index scores. For more information please refer to the technical reports.
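The two minimum-data decision rules above can be sketched in Python. The scores and the averaging step are illustrative assumptions; only the 50% and 60% thresholds come from the rules themselves.

```python
# Sketch of the minimum-data rules: a category score needs >= 50% of its
# measured indicators, and an index score needs >= 60% of its categories.
# None represents an indicator or category that could not be scored.

def roll_up(scores, min_proportion):
    """Average the available (non-None) scores if enough data is present,
    otherwise return None (not scored)."""
    available = [s for s in scores if s is not None]
    if len(available) / len(scores) < min_proportion:
        return None
    return sum(available) / len(available)

# Two of three indicators measured -> 67% >= 50%, so the category is scored
category = roll_up([55, 70, None], min_proportion=0.5)

# Only one of three categories available -> 33% < 60%, so no index score
index = roll_up([category, None, None], min_proportion=0.6)
print(category, index)  # 62.5 None
```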
Why is there more than one dataset making up the inshore seagrass scores for the 2016 report card?
There are currently two different seagrass monitoring programs in the Mackay-Whitsunday region: the Marine Monitoring Program (MMP) and the Queensland Ports Seagrass Monitoring Program (QPSMP). The two programs have different aims, which means that they do not measure the same seagrass attributes nor do they use the same methodologies.
The QPSMP measures seagrass composition, area and biomass indicators, while the MMP reports abundance, reproduction and nutrient status indicators. The seagrass scores in the 2016 report card are based on an interim approach to reporting seagrass condition that uses the indicators from both programs. Indicator scores are averaged at the site level, and the average of the site scores then provides an overall seagrass score. Work is underway to completely integrate the data from the two programs so that the same indicators are used to report seagrass across the reporting zones in subsequent report cards.
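The interim site-then-overall averaging described above can be sketched as follows. The site names, indicator scores and 0-100 scale are hypothetical.

```python
# Interim seagrass roll-up sketch: average each site's indicator scores,
# then average the site scores into one overall seagrass score.

def mean(values):
    return sum(values) / len(values)

sites = {
    # Hypothetical QPSMP site: composition, area, biomass
    "site_a": [60, 50, 70],
    # Hypothetical MMP site: abundance, reproduction, nutrient status
    "site_b": [40, 55, 55],
}
site_scores = {name: mean(scores) for name, scores in sites.items()}
overall_seagrass = mean(site_scores.values())
print(overall_seagrass)  # 55.0
```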
What period of time do the report cards cover?
Annual reporting covers a full year stretching from July of one year to June of the next. This timeline for annual reporting has been selected because it takes into account the dry and wet season cycle, ensuring that each wet season is included in one reporting period. The timeline below shows when the data for the 2016 report card was collected.
Why is there a delay between the data collection and the release of the report cards?
In preparation for a report card, data needs to be collated, validated and comprehensively analysed before it is ready to be released in the report card. This process takes six to nine months. Time is also needed to review the processes for collecting data, along with the data itself; this review is undertaken by both the Technical Working Group and the Reef Independent Science Panel.
The Partnership is committed to reducing the time between data collection and report card release, to improve the timeliness and relevance of the report card.
What is data confidence and how is it measured?
Every time an observation is made (data is collected) or a score is calculated, there is potential for error. Data confidence helps to describe how confident managers and experts are in the methods of data collection and analysis that are used to produce an indicator score reported in the report card. Confidence surrounding the report card grades is measured on a five-point scale. This tells us how confident we are, from low to high, that the calculated grade reflects the true condition of the indicator. For more information on data confidence click here.
Why should some results be viewed with caution?
Where confidence is not high (a score of three or lower), results should be viewed with caution. An example of this is for water quality in the freshwater river basins, which has a confidence score of three. This is because the overall score for water quality is derived from only one site per river basin. Even though samples are taken monthly from these sites, caution should be used when interpreting results as the site sampled might not represent the rest of the waterways in the basin (for example water quality in the upper sections of a waterway may be in better condition than in the lower section where a sample site is located and vice versa). For more information see our confidence page and the interactive results page, where confidence for each indicator is shown.
What is measured to produce the score for the pesticides indicator?
Currently, the pesticides indicator includes measurements of the concentration of herbicides in the water column that impact on plant photosynthesis (food production). These herbicides can impact aquatic plants and corals, so it is important to measure them in our waterways. Other pesticides, like insecticides and fungicides, will be included in our reporting as our understanding improves of how they impact on plant and animal species in our waterways.
Is there a difference between reporting pesticides in the marine and freshwater/estuarine environments?
The method for reporting pesticides is different in freshwater/estuarine waterways compared to in the marine environment. The “multisubstance-Potentially Affected Fraction” (ms-PAF) method is used in freshwater/estuarine waterways and the “PSII herbicide equivalent concentration” (PSII-HEq) method is used in the marine environment.
What is the difference between the ms-PAF and PSII-HEq method for reporting pesticides?
The ms-PAF method, used in freshwater and estuarine waterways, reports the percentage of species affected by the toxicity of a mixture of pesticides. The PSII-HEq method instead reports directly on the toxicity of the pesticide mixture, rather than on the percentage of species that would be affected by it.
What is the ms-PAF method for reporting pesticides?
The newer multisubstance-Potentially Affected Fraction (ms-PAF) method has been developed so it can measure the impact of all pesticides in a mixture, regardless of whether they affect organisms in different ways (e.g. this method can account for chemicals that impact cell division and chemicals that impact photosynthesis).
The ms-PAF method estimates the ecological risk of a chemical to an aquatic ecosystem by determining the percentage of species that would potentially be affected by a given concentration of a pesticide. When multiple pesticides are detected, the percentages of species affected by each pesticide are added together. Currently, in freshwater and estuarine systems only herbicides that impact on photosynthesis are reported using the ms-PAF method. For these herbicides, only species that photosynthesise (such as micro-algae and seagrasses) are used to assess the potential impact, as these are the most sensitive group of organisms to these herbicides. As our understanding of the toxicity of other pesticides expands, they will progressively be included in reporting.
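The idea can be sketched in Python. In this simplified sketch, each herbicide's concentration is converted to a percentage of photosynthesising species potentially affected via a species sensitivity distribution, and the percentages for co-occurring herbicides are added (capped at 100%), per the description above. The log-normal distribution parameters and detected concentrations are hypothetical placeholders, not the values used in the report card.

```python
import math

def species_affected_pct(conc_ugL, ssd_median_ugL, ssd_sigma):
    """Percent of species affected at this concentration, from a
    hypothetical log-normal species sensitivity distribution."""
    if conc_ugL <= 0:
        return 0.0
    z = (math.log(conc_ugL) - math.log(ssd_median_ugL)) / ssd_sigma
    return 100 * 0.5 * (1 + math.erf(z / math.sqrt(2)))  # normal CDF

# Hypothetical detections: (concentration ug/L, SSD median ug/L, SSD sigma)
detections = {"diuron": (0.5, 2.0, 1.0), "atrazine": (1.0, 10.0, 1.2)}

# Sum the per-herbicide percentages, capped at 100% of species
ms_paf = min(100.0, sum(species_affected_pct(*d) for d in detections.values()))
print(f"{ms_paf:.1f}% of species potentially affected")
```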
How has ms-PAF reporting improved in the 2016 report card?
The 2016 report card has moved from reporting on just the five herbicides that impact photosynthesis previously identified as the pesticides of greatest concern to the health and resilience of the Great Barrier Reef (ametryn, atrazine, diuron, hexazinone and tebuthiuron) to thirteen such herbicides, adding bromacil, fluometuron, metribuzin, prometryn, propazine, simazine, terbuthylazine and terbutryn.
In future report cards, ms-PAF will assess up to 28 of the pesticides detected in the Region, including herbicides, insecticides and fungicides.
What is the PSII-HEq method for reporting pesticides?
The PSII herbicide equivalent concentration (PSII-HEq) method measures the toxicity of photosystem II herbicides in a mixture. These herbicides impact on plant photosynthesis (food production). The method assumes that the herbicides act together in the receiving environment (i.e. cumulatively), resulting in more environmental harm. To calculate PSII-HEq, each herbicide is compared to a reference PSII herbicide: diuron. As the reference chemical, diuron is given a value of 1; a herbicide more potent than diuron is given a value greater than 1, and a less potent herbicide a value less than 1. Toxicity is then calculated for a whole water sample by multiplying the value assigned to each herbicide by its respective concentration, and these values are added together to produce the final toxicity score. This method is widely used and simple to calculate; however, it is limited by the fact that it can only account for chemicals that impact aquatic organisms in the same manner (e.g. chemicals, such as PSII herbicides, that impact the photosynthetic process at the same point).
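The PSII-HEq calculation described above amounts to a potency-weighted sum. The relative potency factors and sample concentrations below are hypothetical examples, not measured or published values.

```python
# PSII-HEq sketch: multiply each herbicide's concentration by its potency
# relative to diuron (= 1.0), then sum into a diuron-equivalent total.
# All values below are hypothetical, for illustration only.

relative_potency = {
    "diuron": 1.0,      # reference chemical
    "atrazine": 0.2,    # assumed less potent than diuron
    "ametryn": 2.0,     # assumed more potent than diuron
}
sample_ugL = {"diuron": 0.3, "atrazine": 1.0, "ametryn": 0.1}

psii_heq = sum(conc * relative_potency[h] for h, conc in sample_ugL.items())
print(round(psii_heq, 2))  # 0.7 (ug/L diuron equivalents)
```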
How is fish community health measured in our freshwater basins?
Two separate indicators of fish community health are measured to provide a condition score:
- Native species richness (number of native fish species in a sample); and
- Abundance of pest fish (proportion of a sample that is pest fish)
Fish are sampled by electrofishing. To derive a fish score, samples from the 2015/16 year were compared to what is ‘expected’ in a minimally disturbed reference stream with similar landscape attributes. ‘Expected’ fish are modelled from fish sampling data provided by Catchment Solutions, Reef Catchments, and the Department of Science, Information Technology and Innovation.
Currently, the types of species (composition) that are observed during the sampling are not considered when determining a condition score. More work will be undertaken in the future to improve the ‘expected’ model and examine how fish community health can incorporate species composition in the overall score.
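The observed-versus-expected comparison described above can be sketched as follows. The scoring formula, equal weighting of the two indicators and the sample numbers are assumptions for illustration, not the method used to derive report card scores.

```python
# Sketch of combining the two fish indicators into one condition score.
# Weighting and formula are hypothetical.

def fish_condition(observed_native, expected_native, pest_proportion):
    """Combine native richness and pest abundance into a 0-1 score."""
    # Native species richness: fraction of 'expected' species observed
    richness_score = min(observed_native / expected_native, 1.0)
    # Pest abundance: the smaller the pest fraction of the sample, the better
    pest_score = 1.0 - pest_proportion
    return (richness_score + pest_score) / 2

# Hypothetical sample: 6 of 8 expected native species, 10% pest fish
print(round(fish_condition(6, 8, 0.10), 2))
```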
What is stewardship?
We define stewardship as “responsible and sustainable use and protection of our water resources, waterways and catchments to enhance the social, cultural, environmental and economic values of the Region”.
The Partnership assesses annually how our regional industries are performing against stewardship criteria. Stewardship is important to include in our annual report cards as it provides information on the actions landholders and organisations in the Region are implementing that will provide benefits to ecosystems. For more information visit our stewardship page.
What is measured in the Cultural Heritage assessments?
Cultural Heritage site assessments were undertaken in 2016 and were reported on in the 2015 report card. Assessments were undertaken in the St Helens zone, the Cape Hillsborough zone and the Whitsunday, Hook and South Molle Islands zone. In total, 21 sites were assessed across the three zones. These included shell middens, rock shelters, fish traps, quarries and paintings. Zones were scored against spiritual/social values, scientific values, physical condition, protection of sites and cultural maintenance indicators. For more information please visit the Cultural Heritage Assessments page.
Why are there so many grey areas in the report cards?
Grey areas indicate where there is a data gap. There are a number of reasons why there are data gaps in the report card. Importantly, data used for the report card must be collected and analysed in a scientifically robust manner. In some cases, data may be available on a particular indicator, but a significant body of work may still be required to ensure that it is reported in a suitable format for the report card (e.g. flow indicators across the freshwater and estuarine zones). In other circumstances, there might be multiple programs collecting data and work needs to be done to ensure data between programs is consistent and comparable (e.g. data gaps for fish across the Region). However, for a number of situations throughout the Mackay-Whitsunday Region, there are no monitoring programs in place and the condition of the indicators is completely unknown (e.g. in the southern inshore marine zone). Work is underway to fill all of these data gaps.
What is being done to improve the report card?
A number of projects are underway to fill current data gaps or to expand on existing programs:
- A project is underway to develop appropriate indicators and methodology to report freshwater flows in our freshwater basins and estuaries, in collaboration with the Wet Tropics Healthy Waterways Partnership. This will allow the flow indicator to be reported in the next report card for the first time.
- In September 2017, the Partnership established a new monitoring program for water quality, seagrass and coral in the southern inshore marine zone. Data from this monitoring program will contribute to the first scores for the southern inshore marine zone in the 2018 report card.
- Additional end-of-system water quality monitoring sites on the Don River, Proserpine River and Plane Creek have been established since December 2016 as part of the expansion of the DSITI Great Barrier Reef Catchment Loads Monitoring Program. This will provide water quality data that will allow reporting in the Don and Proserpine basins for the first time in the 2017 report card, and will improve both our understanding of, and confidence in, water quality results in the Plane basin.
- Additional funding is being made available over five years through the Queensland Government’s Reef Water Quality Program to support monitoring for the Mackay-Whitsunday and Wet Tropics regional report cards. This will ensure that the existing estuarine and freshwater water quality monitoring is maintained and, where possible, expanded, and that freshwater fish assessments are continued in future report cards.
- Where possible, the Partnership is working with relevant citizen science organisations and initiatives to include relevant datasets in future report cards. For more information about citizen science organisations in the GBR Region click here.
- The Partnership is working within a number of the Reef 2050 Integrated Monitoring and Modelling Reporting Program’s (RIMMReP) working groups, including the Human Dimensions group, to develop suitable indicators/monitoring programs that can be used in future report cards. The Human Dimensions group covers social, economic, governance and cultural heritage components of the Reef 2050 Plan. For more information on RIMMReP click here.
- Future report cards will likely utilise a series of updated stewardship frameworks to assess our non-agricultural industries in the Region. An independent review of the frameworks used for the 2014 pilot, 2015 and 2016 report cards was undertaken in 2017, and its results will be used to refine and improve the stewardship assessment frameworks for future report cards.