I used to work in commercial manufacturing, so I have some familiarity with ISO requirements such as ISO 9000. Those requirements were no guarantee that things would be done correctly; rather, they addressed whether a manufacturer had its own standards and followed them, and whether a plan was in place for addressing problems when they arose. One way to make compliance easier was to write an internal ISO plan that was intentionally "loose," a possibility that existed because manufacturers wrote their own plans. So while the plans had utility, there was tremendous variance in what they covered and how faithfully they were applied. Companies were also required to hire their own ISO inspectors, and it always struck me as odd, for example, that the company I worked for in California brought in its ISO inspectors from Ireland because they found them easy to work with.
As for what checks and balances there are in the development of climate models: for one thing, they are peer-reviewed and are often the result of collaborations among many people and institutions. The ones in public use are scientifically validated. The Community Earth System Model is an example of one such model:
http://www.cesm.ucar.edu/
You can, if you so desire, get trained in the use of the model, download the source code, and run it yourself (a rough sketch of that workflow follows below). On the website you can find information on the "governance" of the model, which is probably the closest thing to ISO certification: model components are controlled by working groups under the guidance of a scientific steering committee. Because of peer review, the open-source nature of the model, and the requirement for scientific validation, I would say that the model is at least as rigorously checked as (and probably much more rigorously than) most commercial software. The proprietary nature of commercial software, by contrast, means that much of its code is unavailable for public inspection.
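In case you're curious what "download it and run it" actually looks like, here is a minimal sketch of the typical workflow on a supported machine. This assumes the present-day GitHub-based CESM/CIME distribution (the details have changed across releases, and earlier versions used Subversion and different scripts), and the component set and resolution names are just illustrative examples:

    # fetch the CESM source and its component externals
    git clone https://github.com/ESCOMP/CESM.git my_cesm
    cd my_cesm
    ./manage_externals/checkout_externals

    # create, configure, build, and submit a simulation "case"
    # (B1850 and f19_g17 are example compset and resolution names)
    cd cime/scripts
    ./create_newcase --case ~/cases/b1850_test --compset B1850 --res f19_g17
    cd ~/cases/b1850_test
    ./case.setup
    ./case.build
    ./case.submit

The point is that every step, from the source checkout to the submitted run, is out in the open for anyone to inspect.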
EDIT for jim z: I agree that models may be used for insight, and that within their limitations quantitative results may be obtained from them, but they do need validation. Whether the study you mentioned was a valid use of the model, and whether its results were justified, I don't know.
I was actually trying to answer the question that was asked, which, in case you missed it, was about software assurance (admittedly the question was obscured by all the "additional details" that even Ottawa Mike acknowledged were irrelevant).
EDIT for Ottawa Mike: As usual, I don't think you're really interested in the answer to your "question." This is not commercial software, so expecting it to go through commercial software assurance testing is a bit silly. Most software does not go through the kind of assurance testing you're talking about; it may be expected in banking or transportation applications, but this is neither of those. If the software crashes, nobody gets hurt; the modelers just figure out what went wrong and re-run the simulation. There is also no need for the security features you might find in banking software, although with groups like the Heartland Institute and the hackers of the UEA emails running around, perhaps they DO need those security features.
Again, this code is open (unlike virtually ALL commercial software), so if you think there's a problem with it, you can analyze the code line by line and pass what you find along to the working groups. As is your wont, you have no substantive criticism, so you sling mud and hope some of it sticks. It mostly sticks to you.