
This is what Lauer and Hamilton did, Tomas. I referenced them and described their method in my paper. The ±4 W/m² is the annual calibration uncertainty of the average CMIP5 climate model. Patrick Brown is wrong that it is time invariant. See the derivation in Section 6.2 of the Supporting Information.

Tomas Forkert, “*… and consider the geographically averaged deviations allowing for cancellation of both signs.*”

Errors are combined as their root mean square, Tomas, specifically to prevent sign cancellation. Cancellation of errors over a calibration period does not improve predictive uncertainty.
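To make the sign-cancellation point concrete, here is a minimal sketch with made-up grid-cell errors (the values are illustrative, not from any model): a plain average lets positive and negative errors cancel, while the root-mean-square combination squares each error first, so cancellation cannot hide the typical error magnitude.

```python
import math

# Hypothetical grid-cell flux errors in W/m^2 (made-up values, mixed signs)
errors = [3.0, -4.0, 5.0, -3.5, 4.5]

# Plain average: positive and negative errors cancel each other
mean_error = sum(errors) / len(errors)

# Root-mean-square: each error is squared first, so no cancellation
rms_error = math.sqrt(sum(e * e for e in errors) / len(errors))

print(mean_error)  # small, because cancellation hides the error magnitude
print(rms_error)   # ~4, reflecting the typical size of the errors
```

The mean comes out near 1 W/m² while the RMS is about 4 W/m²: the same data, but only the RMS reports how large the errors actually are.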

I hope you read this before it is deleted. I have posted some recent replies here, which have, perhaps not so mysteriously, disappeared.

I fully agree with you that the 4 W/m² is independent of time and therefore applicable on all time scales. Your point about the missing consideration of cancellation of geographical deviations in the computation of the 4 W/m² is, in my opinion, correct. However, I think this is because the number was derived for a different purpose than the one Dr. Frank now uses it for. As far as I understand it, it was calculated by averaging over a range of models and was meant to be a measure of dispersion between models. If one is interested in the LCF error for uncertainty propagation, one would have to determine this error for every model separately and consider the geographically averaged deviations, allowing cancellation of both signs. I bet the resulting net LCF uncertainty per model, relative to the change in forcing, is not a multiple of 1 but significantly less.

I stumbled across Dr. Frank's paper and was at first impressed by the detailed and knowledgeable presentation of the state of the art in climate modelling. Nevertheless, I had a gut feeling that it is flawed in certain places. Your presentation in the video after minute 9 gave me the first hints as to where the flaw is located.

But one problem remains for me, and it concerns the Fi in Dr. Frank's emulation equation. After minute 11 you presented the results of your own calculations. Where did you get the Fi from? I found no hint in Dr. Frank's paper as to where these numbers come from. Is my presumption correct that they are derived from the mentioned 4 W/m² by scaling it linearly with iteration time? (I presume so, since Dr. Frank insists on the units being W/m²/year.) If that understanding is correct, then that is the basic flaw in Dr. Frank's argument, and any physical argumentation about energy conservation, flux imbalances, etc. is just misleading.

The 4 W/m² was derived from a statistical analysis of cloud-cover deviations over a period of 20 years. Under the assumption of normally distributed yearly deviations (which is of course an idealization and not strictly true), the 4 W/m² is, in my opinion, to be understood as a 'standard deviation' (more precisely, the RMS) of the deviations between calculated and observed cloud cover in the mentioned hindcasts. Has anyone observed a systematic, time-dependent trend in these deviations? I presume not. Of course there ARE systematic differences, as can be seen in Dr. Frank's Fig. 4, but they depend on geographical latitude and are present in all GCMs. When there is no TIME-DEPENDENT deviation between simulated and observed cloud cover, then the 4 W/m² is a number that characterizes the 'random' process responsible for the deviations and is completely INDEPENDENT OF TIME.
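A toy simulation illustrates the time-independence claim, under the stated (idealized) assumption of stationary, normally distributed yearly deviations: the RMS of such a series does not depend on which part of the record it is computed from. All numbers here are synthetic, not model output.

```python
import math
import random

random.seed(0)

# Simulated annual deviations: stationary, zero-trend noise with sigma = 4 W/m^2
years = 20
devs = [random.gauss(0.0, 4.0) for _ in range(years)]

def rms(xs):
    """Root mean square of a sequence."""
    return math.sqrt(sum(x * x for x in xs) / len(xs))

# The RMS over the first and second decades is statistically the same;
# there is no time dependence to extract from a stationary process.
print(rms(devs[:10]), rms(devs[10:]))
```

Both halves give an RMS near 4; a stationary RMS is a property of the process, not of any particular year.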
I understand that you question the way the 4 W/m² is determined, since it involves combining individual deviations of varying sign in quadrature, but that concerns only HOW the 4 W/m² is computed and, thus, only the absolute size of the number.

I would finally like to sum up my argument: the exploding uncertainty ranges that Dr. Frank computes result from the flawed assumption that an RMS of 4 W/m² in cloud forcing, determined from a 20-year time series, applies to just the first year of the analysis and then recurs with every year of projection time. That is the only way one could create an uncertainty that grows so strongly using the standard formula for error propagation of statistically independent quantities.
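The growth in question follows mechanically from treating the ±4 W/m² as a fresh, independent error each year and summing in quadrature. A sketch of the scaling (this reproduces only the standard propagation formula for n equal, independent errors, not Dr. Frank's actual emulator):

```python
import math

SIGMA_ANNUAL = 4.0  # W/m^2, treated as an independent error each year

def propagated_uncertainty(n_years, sigma):
    # Quadrature sum of n equal, independent errors:
    # u_total = sqrt(sigma^2 + sigma^2 + ...) = sqrt(n) * sigma
    return math.sqrt(n_years) * sigma

for n in (1, 25, 100):
    print(n, propagated_uncertainty(n, SIGMA_ANNUAL))
# Uncertainty grows as sqrt(n): 4 -> 20 -> 40 W/m^2
```

If instead the 4 W/m² is a time-independent statistic of a stationary calibration record, nothing in it accumulates, and the sqrt(n) growth never gets started.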

What is your opinion about this?

I’d be interested to see you post evidence that any working skeptical scientist gets money or kind from Koch. Do you have any evidence, or do you just make stuff up?

https://www.frontiersin.org/articles/10.3389/feart.2019.00223/full

Notice the reviewers are Davide Zanchettin and Carl Wunsch, both very respected climate scientists. And my editor, Jing-jia Luo, is a physical meteorologist. You have no room there for the political smear that seems to come easily to you.

As I told Harry, my only income is from my professional employment at SLAC National Accelerator Laboratory, Stanford University. I’ve received no pay or remuneration in cash or kind from Koch, from Heartland, from oil billionaires or from anyone else. Your snarky little question notwithstanding.

As to the future, my paper is the definitive analysis that climate models have no predictive value. There is no indication that CO2 emissions have warmed, are warming, or will warm the climate. Nothing that is happening now is in any way distinguishable from natural variation.

If you’re worried about your descendants, then worry about how they will fare in the world of poverty, misery, and early death that will result from the knifing of clean cheap energy being attempted by the deep green left, progressive nut-cases, the IPCC, and the UN.

All my income derives from my professional employment as a physical methods chemist at SLAC National Accelerator Laboratory, Stanford University.

I’ve actually spent a few thousand $ of my own money on my climate work. Negative income.

Apart from that, not an intelligent question, Harry.
