
Report Slams Eco-Ratings ‘Secret Sauce’

An evaluation of 21 sustainability ratings has found that most fall short on transparency and lack sufficient quality controls.

The research by think tank and consultancy SustainAbility also found that ratings systems are often too general in their stated goals and too complex in terms of their criteria and scoring.

“We also note inconsistency between some raters’ stated objectives of helping companies improve their performance and disclosure and their own lack of transparency and availability,” the report said.

SustainAbility said that too often, raters do not sufficiently explain their methodologies, treating those methods like a “secret sauce”.

“If raters truly want to help companies improve, they must provide clear blueprints for how to improve ratings performance (e.g. an “A” requires a company to do X, Y and Z),” the report added.

Even raters concerned about disclosing commercially sensitive information to competitors should offer significant disclosure to the companies they have rated, SustainAbility argued.

The report said that raters showing good practice on disclosure of methodology were CDP, Climate Counts, FTSE4Good, GS Sustain, Global 100 and the Access to Medicine Index.

SustainAbility argued that there is too little focus on quality control among the ratings. Many ratings systems rely on outside sources such as the media or non-profits. This approach cannot survive as the users of ratings become more mainstream, SustainAbility said. “For example, as major asset managers increasingly consider sustainability factors in their investment processes, they will look to ratings as ‘seals of approval’ on companies and will thus demand strong processes behind these seals.”

One example of a promising approach on quality is the Corporate Sustainability and Responsibility Research Voluntary Quality Standard, the report said.

Ratings systems overlapped significantly, the study said, and their objectives were often too general. Too many raters failed to explain why their rating system is distinct.

“We do not believe that the typical subject of ratings — large, multinational companies — need to be faced with multiple ratings to understand that improving performance and transparency is important,” the report said. “This may have been the case a decade ago, but today most companies appreciate this. Thus, ratings require greater purpose than the act of rating to increase transparency to justify themselves.”

The research found that some of the simplest ratings systems are also the best. More straightforward systems are more likely to encourage company participation, and more likely to be used by consumers and investors.

Other findings:

  • Raters too often cite their web hits or media coverage to argue for their success, rather than explaining the direct impact they have on companies. Some raters showing good validation practices were CSRHub, FTSE4Good and Trucost.
  • Too many ratings focus on past or current performance. Ratings that will emerge as winners will focus not on greenhouse gas emissions, but on measuring revenue driven by climate-oriented products and services.
  • Raters must invest considerably more time directly engaging with the companies they rate. The majority of ratings today are based on arm’s-length assessments, but getting closer to companies can better equip raters to verify publicly reported information and help companies to improve their performance.

The report evaluated companies on 13 criteria in four categories: governance and transparency, quality of inputs, research process, and outputs. Each participating rating completed a questionnaire and participated in a conference call with SustainAbility.

Two out of 23 companies invited to participate in Rate the Ratings declined the offer.

The 21 ratings systems assessed were:

— Access to Medicine Index

— ASSET4 (Thomson Reuters)

— Bloomberg ESG Disclosure Scores

— Carbon Disclosure Project

— Murky Waters: Corporate Reporting on Water Risk (Ceres)

— Climate Counts

— CR Magazine 100 Best Corporate Citizens

— CSRHub

— Dow Jones Sustainability Indexes

— EIRIS

— Ethisphere’s World’s Most Ethical Companies

— FTSE4Good Index Series

— The Global 100 Most Sustainable Corporations in the World (Global 100)

— GoodGuide

— GS SUSTAIN

— Maplecroft Climate Innovation Indexes (CIIs)

— Newsweek Green Rankings

— Oekom Corporate Ratings

— Sustainalytics

— Trucost Environmental Impact Assessment

— Vigeo

6 thoughts on “Report Slams Eco-Ratings ‘Secret Sauce’”

  1. While Bloomberg does not do traditional ratings (ours are not ‘performance’ ratings but rather ‘disclosure scores’), our absence from the above article in terms of good disclosure is not supported by the information in the report. On page 17, for instance: “In terms of disclosure of sources, most raters outline source types (e.g. media scans, NGOs) as opposed to identifying specifics (e.g. Wall Street Journal, Sierra Club). At least one — Bloomberg — makes it easy for users to trace the information used in its scoring by linking each data point to the original source (which are generally corporate websites or sustainability reports).”

    And on page 19, “Within the Bloomberg system, subscribers can click on any figure within the ESG scores to see the original source for the figure. Bloomberg’s methodology gives additional points to companies that have externally-verified data.”

  2. I think “slams” is the wrong word. What SustainAbility is properly pointing out in their study is that different rating systems have different purposes and objectives, and they are not always clear in presenting what those objectives are.
    If a rating system is meant to identify opportunities to profit by investing in companies that are sustainable and have good returns, why would one want to publish the methodology? Will Apple tell HP how its iPad works? Would the NY Times share research with the Wall St. Journal before publishing it? Probably not, and in the competitive investment world, there are good reasons not to be totally transparent; otherwise Goldman Sachs will suck up all of your good ideas after you did a lot of research to find a relationship between a sustainability factor and a company’s profits, and they will get rich and you won’t.
    On the other hand, if you are just rating a firm’s sustainability to see who is a “good” or “bad” firm, you do need transparency so people know what you consider good or bad, as we all don’t always agree on those value-based choices.
    Ratings will always evolve and change over time, but they can never be perfect, as S&P and Moody’s found out in the real estate mess: define what you are trying to rate, and do it as best you can!
    John Cusack, former startup CEO, Innovest Strategic Value Advisors

  3. The article suggests that “Too many ratings focus on past or current performance.” That information is, however, valuable in the process of building competitive intelligence, because public GHG disclosures can be used to unravel what might be going on inside the arch rival. How fast did they upgrade to the energy-efficient production lines? How much GHG did they squeeze out of their tier 1 suppliers? How much avoidable energy use is the competitor absorbing in its cost structures? Should we launch our new product on the market in 1 month, or can we afford to wait 6 months because the GHG disclosures indicate that the competitor won’t catch up for 18 months after that? GHG-informed business intelligence is becoming a powerful means of devising competitive strategy for the cost-reducing, GHG-reducing enterprise. These matters are also important for investment decisions. A new world of competitive intelligence and investment decision analysis is fast emerging.

  4. Having recently set up GRI’s Focal Point USA in NY, and having a fairly thorough understanding of ratings, rankings and listings, I have to say I am hearing continued frustration from the corporate community about the proliferation of ratings, rankings and listings. It seems like every other day there is yet another list to get on, or get off.

    If you are a company trying to efficiently and effectively address the complex sustainability issues that face you, there is a risk that resources are focused on these ratings, rankings and listings instead of on reducing or improving impacts and overall sustainability performance.

    The SustainAbility reports (the series) provide a very good overview of the situation and then dive as deeply into the inner workings as is possible given the entrepreneurial spirit that surrounds the sustainability field and the proprietary nature of these rankings, ratings and listings.

    What is a hopeful sign for all involved is when ratings, rankings and listings reference their use of the GRI guidelines and/or at least specific indicators. Bloomberg and CRD Analytics both publicly mention their use of GRI in their methodology. This is not only of benefit to them (if you’re reading thousands of sustainability reports, wouldn’t you like to have consistent and comparable data?), but also to many US and global companies that have been doing GRI reporting for many years.

    My own understanding from talking to many other ratings, ranking and listings firms is that they too have benefited from the many years of effort put in by GRI and its stakeholders. The GRI Guidelines were built with companies and stakeholders from all parts of the global economy over the past 12 years; how could these Guidelines NOT have found their way into the ratings, rankings and listings that are rapidly proliferating in the market?

    I would ask that any ranking, rating or listing related activities first look at the GRI Reports List and all the companies that are already reporting, use this information to help us all recognize those companies’ sustainability efforts, determine when and whether your own research methodology references specific GRI indicators, and reference this when and where applicable.

    If you are a company doing a GRI report and facing continual questions that are already answered in your GRI report, reinforce this in your response to the research firm. Ask them how they have integrated GRI into their own methodology, whether they publicly state this, and whether it is done consistent with the same guidelines you are following.

    For the sake of our field – sustainability – reduce, reuse and recycle!

    Bob Massie and Allen White created an amazing idea 12 years ago that has become the world’s most widely used sustainability reporting framework. Don’t toss aside the tens of thousands of professional hours of hard work that have gone into making this brilliant idea a reality.

    Get involved – GRI is built around a multi-stakeholder process – now is the time – we are moving toward G4.

    Best;
    Mike Wallace
    Director Focal Point USA

  5. This is a great piece. And Mike, we’re big fans of GRI here at the Sustainable Performance Institute, so I’m glad you commented on this.

    My experience working in the A/E/C industry is that there is a lot of frustration about the proliferation of product and project certifications. On the product side, there are many approaches — from indoor environmental quality to life cycle assessment to waste management — and little consistency across certifications (though UL Environment, with its recent acquisitions, seems poised to provide a hub for consistent product certifications).

    On the project side, LEED is dominant. People’s frustration with LEED is that it is an arm’s-length process, and the fees and support systems are subpar. It has created a whole industry of LEED consultants. One thing that the USGBC does well, though, is to enable members to comment on and vote on the rating system. No, we’re not talking about in-person focus groups. But there are public comment periods as well as discussions at Greenbuild and other events. Other organizations could learn from their consensus-driven process.

    Here at SPI, we’ve seen a need to help A/E/C firms deliver LEED and green projects without having to reinvent the wheel for every project. We’ve created a set of guidelines for firms to use so that they can institutionalize sustainability from HR policies to project workplans to product specs. While focused on firms that deliver sustainability services, the guidelines and Certification are a means of providing owners with quality control and differentiation. We’ve held in-person discussions and have our first Public Comment period taking place through March 18. We’re also learning a lot from the ANSI process and are implementing processes to abide by those guidelines as well.

  6. This article also misses the point, which is that competition in the ratings industry is very good for the development of thinking about sustainability issues. Monocultures are not healthy in nature; neither are they healthy in commerce. There is an inherent contradiction in trying to impose a one-size-fits-all methodology from a narrow range of possible providers. It is that kind of thinking which got us into this financial mess in the first place. Demonizing entrepreneurs is not a sustainable behaviour if you want the concepts to grow and evolve.
