Oomph

I am a big fan of Deirdre McCloskey. One of the things she’s always carrying on about is ‘How big is big?’. She argues that in much empirical analysis people confuse statistical significance with substantive significance. In a play on words, she describes this as the standard error of empirical analysis. For readers who are not statistically literate, the standard error refers to the precision of the estimate that the analysis has produced. McCloskey argues that it isn’t enough for an estimated coefficient to have a small standard error (i.e. be estimated with a high degree of precision); it must also have ‘oomph’. I agree. A highly statistically significant relationship might actually have a very small effect and so not be of substantive importance. So it’s not enough to look only at the statistical significance of a relationship; we also need to think about its size. McCloskey talks about this in her book, written jointly with Stephen Ziliak, The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice, and Lives, and an entire 2004 issue of the Journal of Socio-Economics (subscription required) is dedicated to discussing the issue.

In matrix form, this is the idea:

                            No oomph        Oomph
  Not statistically
  significant               Reject          Reject (2)
  Statistically
  significant               Reject (1)      Accept

When doing an empirical test we should look for substantive significance and statistical significance, not just one or the other. We all agree that when a statistic lacks both statistical significance and oomph we should reject whatever hypothesis we are investigating. Similarly, when we have statistical significance and oomph we should accept whatever hypothesis we’re investigating. (This language might annoy some purists: we don’t accept hypotheses, we fail to reject them, etc.) The Reject (1) category is what annoys McCloskey so much: a coefficient that has statistical significance but no oomph. The Reject (2) category is more controversial, to my mind. McCloskey also makes the argument that our conventional t-values for hypothesis testing are arbitrary – of course she is right, they are arbitrary. She seems to suggest that the appropriate threshold varies from case to case. I do have some sympathy for that argument, but I am uncomfortable with the position. My view is closer to Jeffrey Wooldridge’s (2004) Journal of Socio-Economics position:

While I completely agree that statistical significance does not imply economic significance, I think pushing an economically large effect that is statistically insignificant is usually a stretch.

Results in this category shouldn’t be ignored; rather, I think they make a case for much more work.

So why am I carrying on about this? In Phil Jones’ BBC interview we see an example of McCloskey’s standard error. Recall questions B and C.

B – Do you agree that from 1995 to the present there has been no statistically-significant global warming?
Yes, but only just. I also calculated the trend for the period 1995 to 2009. This trend (0.12C per decade) is positive, but not significant at the 95% significance level. The positive trend is quite close to the significance level. Achieving statistical significance in scientific terms is much more likely for longer periods, and much less likely for shorter periods.

C – Do you agree that from January 2002 to the present there has been statistically significant global cooling?
No. This period is even shorter than 1995-2009. The trend this time is negative (-0.12C per decade), but this trend is not statistically significant.

In my previous discussion of this interview I made the point that Jones should have told us what his significance levels actually were (a standard error, a t-statistic or a p-value would have been very useful). I have guesstimated his analysis in EViews using data from the CRU website. I estimated the following equation:
Temp = constant + B*Time Trend + AR(1) + error
I included the AR(1) term to take care of any unit-root problems, and by using the Newey-West correction I was able to get results very similar to what Jones describes in his interview.
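For readers without EViews, here is a minimal sketch of that estimation in Python/statsmodels. The file name, column names and HAC lag length are my assumptions, and the lagged anomaly is only a stand-in for EViews’ AR(1) error term, so this approximates rather than reproduces the regressions below.

```python
# Rough replication sketch: linear trend in monthly anomalies with an
# AR(1)-style term and Newey-West (HAC) standard errors.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("hadcrut_monthly.csv", parse_dates=["date"])  # hypothetical file
df = df[df["date"] >= "1995-01-01"].reset_index(drop=True)

y = df["anom"]                      # hypothetical anomaly column
X = pd.DataFrame({
    "const": 1.0,
    "trend": range(len(df)),        # monthly time trend
    "ar1": y.shift(1),              # lagged anomaly, standing in for AR(1)
})
# Drop the first row lost to the lag; 12 HAC lags is an arbitrary choice.
res = sm.OLS(y.iloc[1:], X.iloc[1:]).fit(cov_type="HAC", cov_kwds={"maxlags": 12})
print(res.summary())
print("trend per decade:", res.params["trend"] * 120)  # 120 months = 10 years
```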

The regression corresponding to question B:

The regression corresponding to question C:

The coefficient I estimate for question B is slightly smaller than Jones’ estimate (0.11C per decade versus his 0.12C per decade) and my coefficient for question C is also slightly smaller (-0.14C per decade versus his -0.12C per decade) [-0.14 is a smaller number than -0.12; don’t get confused by the minus signs]. Neither of those coefficients is significant at the 95 percent significance level, as Jones says. But, as he also says, the question B coefficient is very close to significance – it has a p-value of 0.0512. If we were to accept a 90 percent significance level it would be statistically significant (1 – 0.0512 = 94.88 percent). So why doesn’t Jones say that? I’ve often seen people making the argument that a 90 percent significance level is okay.

I think the answer is in the question C coefficient. There he says the trend is not statistically significant. But look at the p-value: 0.0723. It is clearly not statistically significant at the 95 percent level, but it would be significant at the 90 percent level (1 – 0.0723 = 92.77 percent). In other words, Jones cannot claim that the answer to question B is statistically significant without conceding that the answer to question C, the negative coefficient, is also statistically significant at the 90 percent level. (The p-value for question C is very sensitive to the Newey-West adjustment; without that adjustment it is statistically significant at the 95 percent level.) Under the standard error approach that McCloskey so hates, it would be game over.

So it looks to me as though he is playing silly-buggers with significance levels. In his July 5, 2005 email Jones had indicated:

The scientific community would come down on me in no uncertain terms if I said the world had cooled from 1998. OK it has but it is only seven years of data and it isn’t statistically significant.

Well, maybe now it is; the trend coefficient from 2003 looks to be statistically significantly different from zero at the 90 percent significance level.

Some caveats: I am not an econometrician. I have guesstimated what Jones did. What I have done is very rough and ready. He may have done something very different and the significance tests in his analysis might be very different to those I have reported here. He should post his tests and the significance levels on the web so that we can all have a look at them.


125 Responses to Oomph

  1. Butterfield, Bloomfield & Bishop

    let me outsource this to Brad De Long

    anybody who has taken even one semester of statistics does–that “no trend” does not mean the same thing as “no statistically significant trend,” that you are unlikely to find statistical significance when you restrict your attention to a short period because your statistical tests then lack power, and that everyone literate in statistics asked for their point estimate of the warming trend since 1975 would say that it is almost as much as the overall trend since 1860: 0.012C per year as compared to 0.015C per year.

  2. daddy dave

    This problem with significance testing has been around for a while, and several people have noted it. There are remedies. The best way to measure “oomph” is with effect size, which is, in simplistic terms, the size of the effect divided by the standard deviation (not standard error). A good thing about it is that unlike significance, it’s immune to large samples. There’s already a trend in many research areas to incorporate effect size into articles; in other words, you have to provide effect size estimates alongside significance.
    Unfortunately, the climate science guys never got the memo, and so their scientific logic, in which they rely exclusively on significance tests, is at least ten years out of date.
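To make the effect-size idea concrete, here is a minimal sketch in Python; the two-sample setting, the size of the shift and the sample sizes are invented for illustration. With a big enough sample, a shift of 0.02 standard deviations is comfortably ‘statistically significant’, yet Cohen’s d shows it has almost no oomph.

```python
# Cohen's d: mean difference scaled by the pooled standard deviation
# (not the standard error), so it does not grow with sample size.
import numpy as np
from scipy import stats

def cohens_d(a, b):
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

rng = np.random.default_rng(0)
a = rng.normal(0.02, 1.0, 1_000_000)   # tiny true shift of 0.02 sd ...
b = rng.normal(0.00, 1.0, 1_000_000)   # ... in an enormous sample
t_stat, p_value = stats.ttest_ind(a, b)
print(f"p-value = {p_value:.2g}, Cohen's d = {cohens_d(a, b):.3f}")
# Typical output: p-value near zero, d of only about 0.02.
```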

  3. daddy dave

    actually, the best way to make sure your effect is not a type 1 error is to replicate from scratch, but that’s really expensive.

  4. JC

    Homes:

    You used 3 years of data to support the argument that Nazi Germany was a hot Keynesian success story. How does your rant about statistics figure in this, pal?

  5. Butterfield, Bloomfield & Bishop

    Oh dear. Evaluating an economy is not the same as evaluating temperature over time.
    Apples and Oranges

    By the way if you ever read any article in a journal or book you will find no-one disagrees that Germany had a recovery whilst there is no-one supporting your absurd position.

  6. daddy dave

    Homer, unlike JC I avoided replying to you because you clearly have no clue about this topic.
    for one thing:
    anybody who has taken even one semester of statistics does–that “no trend” does not mean the same thing as “no statistically significant trend
    .
    “No statistically significant trend” simply means:
    “If there’s a trend, we haven’t found it. It isn’t in this data.”

  7. Peter Patton

    Very interesting post. One area where I frequently see the importance of this distinction is in debates about genetics.

    For example, animal rights activists, who try to include chimpanzees under the heading “humanity” will focus on the 98.4% (or whatever) shared DNA chimps and humans have. However, given the incredible Oomph we humans attribute to people with IQs of 140 versus 50, or the beauty of Angelina Jolie and Brad Pitt versus even above average looking people or between Keith Richards’ guitar riffs and our teenage garage band, these very, very high statistical correlations mean squat.

  8. Butterfield, Bloomfield & Bishop

    Yes DD, you have just shown you can’t read at least two things.

    You have just shown you have little idea of what the topic actually is.

  9. “By the way if you ever read any article in a journal or book you will find no-one disagrees that Germany had a recovery whilst there is no-one supporting your absurd position.”

    Read Tooze, Homer.

    Well thank God you’re outsourcing your comments now.

    What is the trend in temps since the medieval period? What did Phil Jones say when interviewed by the BBC?

  10. daddy dave

    You have just shown you have little idea of what the topic actually is.
    The topic is significance testing versus effect size (“oomph”).
    Yet somehow, in your brain, this is about the German economy post-WWII. Slightly more on track, you seem to think that it’s about statistical power, but statistical power is totally orthogonal to effect size.

  11. dover_beach

    Homer thinks the sample is too short if we begin in 1995; he prefers 1975. Wouldn’t the best method be to include the significance levels for a variety of scales – decadal, centennial, and millennial? If 15 years can be characterised as noise, so can 30 years, etc.

    But I wonder if this is necessarily the case for the last 15 years when we consider that GHG concentrations have been steadily rising over this same period. If you have persistent increases in GHG concentrations in the atmosphere and no statistically significant increase in GAMT then I think the latter cannot be so easily brushed aside.

  12. TerjeP (say Tay-a)

    Sinclair – you note how the statistical significance changes for the BBC questions if we slightly lower the bar on what is statistically significant. However, you have not indicated how and why the answers have or lack oomph. Can you share your thoughts on this?

  13. Rafe

    The level of significance should always be reported – 90%, 95%, 99%, whatever. But you don’t judge on the “oomph” and the stat sig alone; you also have to look at a whole raft of assumptions that are built into the analysis, assess their plausibility, and consider the way that different assumptions would have altered the result (both the oomph and the significance).
    They should be putting out several scenarios with different assumptions clearly specified and not just one plot of results, as though that represents the simple truth of the matter.
    People might have noticed that very serious questions have been raised about data quality and if you are not a true believer then you need to take that issue seriously.

    On the timespan, of course a longer span makes it easier to get stat sig trends but if the trend is very small then you are back to the Monckton and Lomborg line that there is warming in the medium to long term but it is not enough to justify a major policy response.

  14. Sinclair Davidson

    Terje – I don’t know if they do or don’t have oomph. Jones doesn’t say in his answers. Taken at face value it seems that over 100 years we should expect about 1.2C more on average than now (from question B).

  15. PSC

    Do some ACFs and it’s pretty clear an ARMA model will work better. AR1 acf has an exponential backoff pattern, and you don’t see that in the temperature data. Try ARMA next time – I played with ARMA(1,1) to good effect.

    But that’s by-the-by, not an important criticism.

    It’s pretty clear that the temperature data is quite autocorrelated and something like 10-15 years is the minimum time-span you need to get any kind of statistically significant effect out, ARMA or AR1. Or alternatively expressed, there seems to be some kind of internal noise in the system over a 10-15 year period that overwhelms the signal, but longer time periods show a clear signal.

    Or more simply: the reason there’s no “ooomph” is that the sample size is too bloody small, and if you want to see “oomph” make your sample size bigger.
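PSC’s AR(1)-versus-ARMA(1,1) comparison can be sketched in Python/statsmodels as follows, assuming anom is a pandas Series of monthly anomalies (for example, the anomaly column loaded in the sketch under the main post):

```python
# Fit a linear trend with AR(1) and then ARMA(1,1) noise; compare by AIC.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

exog = pd.DataFrame({"trend": np.arange(len(anom))}, index=anom.index)
for order in [(1, 0, 0), (1, 0, 1)]:            # AR(1) vs ARMA(1,1)
    res = ARIMA(anom, exog=exog, order=order, trend="c").fit()
    print(order, "AIC:", round(res.aic, 1),
          "trend p-value:", round(res.pvalues["trend"], 4))
```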

  16. daddy dave

    Or more simply: the reason there’s no “ooomph” is that the sample size is too bloody small, and if you want to see “oomph” make your sample size bigger.
    .
    Yet Jones thought it was a legitimate analysis, given the data.

  17. Sinclair Davidson

    PSC – I agree on the sample size. We need long time series data for this sort of thing (and we need to avoid Lindley’s paradox too). McCloskey isn’t talking about sample size issues when she’s on about oomph. The other thing is that Jones should have spoken about that and offered more than just a passing comment.

  18. JC

    Sorry, but I’m not buying into this stuff that we can’t derive any inference from short-term climate. Especially now since “the science is settled”.

    We know how much crap has gone into the atmosphere. We can also have a good handle on material events such as the pacific oscillation etc.

    We have all that shit and yet the science community is able to impute estimates and come out with a finding.

    Perhaps the reason is that they don’t like what’s come out.

    This is just freaking bullshit.

    If they can’t make reasonable estimates on short term, they can’t do it for long term either.

  19. Sinclair Davidson

    JC – we wouldn’t do business cycle research on three weeks worth of data, so doing climate change on a small time series is a bit dodgy.

  20. Butterfield, Bloomfield & Bishop

    Mark,

    Go away. Tooze has Germany in full employment in 1936, you fool.

  21. Sinclair Davidson

    Guys – let’s restrict any Tooze related discussion to the open forum.

  22. TerjeP (say Tay-a)

    At a guess I’d say you would be hard pressed to get anything more than a single data point over a three week period. In many ways measuring climate data is easier than measuring GDP or inflation.

  23. JC

    Sinc:

    Sure, 3 months sure.

    Business cycle is 7 odd years, right? Climate is supposed to be about 30.

    You certainly can begin forecasting the economy after 2 1/2 years. Meanwhile I’m more than a little staggered that the “science is settled” crowd suggest they aren’t able to derive a trend etc. after 10 odd years.

  24. Sinclair Davidson

    All up, it’s not unreasonable to argue that we need a lot of data to undertake the kind of analysis that ideally should be done.

  25. JC

    Sinc:

    they make all sorts of adjustments to temperature data sets in regards to ground stations, to the point that I honestly think it is so fucked up we have no idea what they’re doing.

    Of course it was never audited so we don’t know even if they are using standardized techniques around the world.

    The point is that if they are making adjustments to ground station data year to year for such things as urban heat island effect, why can’t they make adjustments to smooth out for 10 years?

  26. Butterfield, Bloomfield & Bishop

    Well, that is incorrect; we do know why there are adjusted temperature readings.

    There is even a paper on it for the US after one person made inaccurate statements on the subject.
    They even congratulated him on raising the topic despite getting the gist of it badly wrong.

  27. Sinclair Davidson

    JC – I agree these guys have done some dodgy stuff and we can’t trust them. 100 percent. But that doesn’t mean that there isn’t important work to be done and that it shouldn’t be done properly.

  28. rog

    Temperature is an indicator – sea level rise is a reality

  29. PSC

    JC – “If they can’t make reasonable estimates on short term, they can’t do it for long term either.”

    You can’t predict the outcome of a single dice roll with any skill.

    You can predict the average outcome of 1000 dice rolls with good skill.

    Your statement is false for a great many physical systems.

  30. Sinclair Davidson

    rog – today isn’t a good day to mention sea-level rises.

  31. daddy dave

    You can predict the average outcome of 1000 dice rolls with good skill.
    .
    That’s only true because we have perfect information about how the dice is constructed. Specifically, it has six sides and is perfectly symmetrical, so all faces have an equal chance of landing face up.
    .
    However with climate science we’re using data to infer a model of the underlying system characteristics, then using that model to generate further data about the future climate. What JC, I think, senses is that a busted model is a busted model and will throw erroneous predictions at both the short term and the long term.
    .
    Plus it may be false for a great many systems but is true for others. It depends on if errors cancel out (such as in dice) or multiply (such as in… weather?).

  32. daddy dave

    correction… I should have said “if errors wash out” not “cancel out” for dice.

  33. Sinclair Davidson

    SRL – fixed. thanks.

  34. JC

    You can’t predict the outcome of a single dice roll with any skill.

    Ummm yes I can. I can predict there is a 1/6 chance that 2 will be the next roll.

    You can predict the average outcome of 1000 dice rolls with good skill.

    That’s not skill, doofus, it’s luck. The only skill that comes in is how you play a game with the odds being placed in reference to a bet.

    Wake up.

    —————–

    How exactly does a roll of the dice get into a discussion on predicting short and long term climate if the models the science is based on are settled?

  35. conrad

    Actually, the only way you can determine the extent that effect size and effect significance are important is based on an a-priori theoretical model. If, for example, your model predicts effects with lots of non-linearities and feedback, then tiny but significant effects can be important.
    .
    For example, if you assume that snowballs rolled in certain circumstances pick up snow based on their surface area and you start with a small snowball then, at the start of the snowball’s trajectory, you won’t notice much difference. Then, as it gets bigger, it will pick up more and more snow, since its surface area will keep growing.
    .
    So what is more important here? The small effect size early on, or the large effect size later?
    .
    Now that might be a silly example, but this is how cancer works, which is why it is recommended that you get screened before big effect sizes can be found. It’s also the idea of feedback in the climate system — once effects are big enough, it may be too late.
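Conrad’s snowball fits in a few lines of code; in this toy loop (the constants are arbitrary) volume grows in proportion to surface area, roughly volume^(2/3), so the per-step gain is negligible early and large later:

```python
# Growth proportional to surface area ~ volume**(2/3): a small,
# insignificant-looking effect early becomes a large one later.
v = 1.0
for step in range(1, 51):
    gain = 0.5 * v ** (2 / 3)   # snow picked up this step
    v += gain
    if step in (1, 10, 25, 50):
        print(f"step {step:2d}: gain = {gain:6.1f}, volume = {v:8.1f}")
```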

  36. JC

    That’s fine Conrad. However we’ve been sold something (insurance) which is, I believe, wrongly premised.

    We’re not really buying insurance at all as the expected losses are not quantifiable.

    We should really be buying an out of the money put option.(Thanks SRL)

  37. PSC

    JC – you’ve misunderstood me:

    “The average outcome of two dice rolls will be 3.5 +/- 0.5”

    and

    “The average outcome of 1000 dice rolls will be 3.5 +/- 0.5”

    It’s clear the second statement is going to be true a lot of the time, and the first comparatively less of the time.

    There’s a lot of physical systems where you can make stronger statements about long term processes than short term processes.

    Second, I thought the whole point of this discussion was stochastic processes – e.g. AR processes. I can reformulate my example in terms of an AR process if it will make you happy.

    Last, which climatologist ever said “the science is settled”? I’ll bet you can’t find any who has said the words “the science is settled” in the sense of saying that there are no unresolved issues at all in any area of climate science.
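PSC’s dice point is easy to check by simulation; a minimal sketch using his sample sizes and tolerance:

```python
# Estimate the probability that the mean of n fair-die rolls lands
# within 3.5 +/- 0.5, over 100,000 simulated trials.
import numpy as np

rng = np.random.default_rng(1)
for n in (2, 1000):
    means = rng.integers(1, 7, size=(100_000, n)).mean(axis=1)
    hit_rate = np.mean(np.abs(means - 3.5) <= 0.5)
    print(f"n = {n:4d}: P(|mean - 3.5| <= 0.5) ~ {hit_rate:.3f}")
# Roughly 0.44 for n = 2, and essentially 1.0 for n = 1000.
```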

  38. rog

    What are you trying to imply Sinclair, that sea levels are not rising?

  39. Sinclair Davidson

    I’m saying that claiming the sea-levels are rising on the same day as a paper making that very claim is withdrawn undermines your argument.

  40. C.L.

    Sorry, Sinclair. I posted that Guardian link on the OT. I don’t always have time to follow all the threads. 😳

  41. PSC

    Daddy Dave – you’ve got the order wrong. Climate models (e.g. EBMs/GCMs etc) are physics based, they’re not stochastic models.

    The data we’re talking about is used to verify the output and discover model deficiencies, it’s not used as an input. This data is not used to infer a model of the underlying system characteristics.

    Climate models are built on things like Newton’s second law, ideal gas equation, radiation parametrizations etc. Climate history is not used as an input, it’s used for checking the output.

  42. JC

    I didn’t misunderstand at all, PSC. You’re flattering yourself.

    Your first comment appears wrong or sloppily written.

    I’ll bet you can’t find any who has said the words “the science is settled”

    Jim Hansen has said that to the extent of suggesting we should try executives from emissions producing firms for crimes against humanity.

    But then on second thoughts you’re right. He shouldn’t be used as an example as he’s not formally trained in climate science.

  43. PSC

    JC – my sincere apologies for writing a comment sloppily so as to lead to your misunderstanding me. I take it from your lack of comment on the more precisely phrased version that you now presumably do understand what I’m getting at and have no disagreement with it?

    Also if you have the chance could you address the phrase: “in the sense of saying that there are no unresolved issues at all in any area of climate science” which you deleted from my comment. I’ve read comments by Hansen where he addresses uncertainty in areas of the climate system at length.

  44. conrad

    “We’re not really buying insurance at all as the expected losses are not quantifiable.”

    You don’t really need to quantify some sorts of growth models, since the end point is essentially infinite (or saturates at a very high level). Think of rabbits. If you don’t shoot the first 6, you have a few billion in a couple of years’ time, which is why you pay your insurance money either to someone that shoots them or to buy a gun.

  45. JC

    PSC:

    You seem to want to project yourself as a thinking person’s intelligent person even if at times you post sloppy comments about the roll of a dice.

    Hansen has stated explicitly so… that people involved in emitting industries should be jailed for crimes against humanity.

    Do you think a lunatic saying those sorts of things would have any doubts about whether “the science is settled”?

    Throw the dice on this one and see if it rolls on 2?

  46. JC

    Conrad:

    I think your analogy that climate science risk is analogous to rabbits screwing themselves is a little silly.

    We’re trying to manage risk of which there are uncertainties on both sides.

    1. risk of agw and the losses we sustain are unquantifiable.

    2. The costs involved are particularly high and they become sunk costs, forever lost.

    It’s not an insurance policy as no one but the loons like Hansen and his brigade at unrealclimate.org are peddling eternal damnation.

  47. daddy dave

    Climate models are built on things like Newton’s second law, ideal gas equation, radiation parametrizations etc. Climate history is not used as an input, it’s used for checking the output.
    .
    If that was true, you could figure out the average annual rainfall or average temperature at any location, such as Perth or Costa Rica, from first principles. However, that isn’t so.
    What does this tell us?
    This tells us that the starting point for modelling climate is not “Newton’s Second Law” and sundry formulas bequeathed to us by geniuses of yore, but a large database of observations.

  48. conrad

    No it isn’t, JC. I’m just pointing to a type of model which is well used, well accepted, and whose properties are well known. Such models are also common in some areas. It was mainly a comment on why there are no general laws that allow one to determine what is more important, effect size or effect significance.
    .
    Whether some aspects of the climate fall into this genre I wouldn’t know. However, if they do, then the consequence and course of action should be fairly obvious. If they don’t, then obviously one needn’t worry.

  49. Sinclair Davidson

    Climate history is not used as an input, it’s used for checking the output.

    Seems a bit backwards to me – but I suppose each discipline does what it can. Is this due to lack of long time series?

  50. Sinclair Davidson

    Conrad – I thought the AGW crowd promoted tipping points because that allowed an argument for an ETS vis-a-vis a carbon tax.

  51. daddy dave

    Climate history is not used as an input, it’s used for checking the output
    .
    In other fields, it’s called “testing theories.” In climate science it’s called “reassuring ourselves that we are right.”

  52. PSC

    Daddy Dave – I can only suggest getting yourself a couple of good textbooks on weather and climate.

    “A Climate Modeling Primer” by McGuffie is a pretty good overview. “Atmosphere, Ocean and Climate Dynamics” by Marshall is great, and there’s an online course as well.

    If you doubt me that Newton’s second law is part of the starting point, get a decent text on climate modeling, look up the “primitive equations”, and you’ll see there a statement relating momentum to force.

  53. daddy dave

    I believe you that it’s a part of the starting point. I find it hard to believe that you can derive the current state of the Earth’s climate from first principles. Without peeking, as it were.

  54. PSC

    “Seems a bit backwards to me – but I suppose each discipline does what it can. Is this due to lack of long time series?”

    Nope – it’s due to having a handy predictive tool called “physics” which is the starting point for climate modeling. Of course to build a practical model you have to throw bits of physics away and parameterize other bits, and you have to keep/throw away/parameterize the appropriate bits otherwise you’ll get a poor model.

    There’s a number of techniques you can use to check if you’ve parameterized/thrown away the appropriate bits. Checking the results of the model against historical data is one of them.

  55. Sinclair Davidson

    Sounds like general equilibrium modelling. I’m a bit suspicious of GE modelling – I always suspect they make up the bits they don’t know. It can be very good for small problems but can be very dodgy for bigger problems IMHO.

  56. JC

    PSC:

    Please. Models are about as useful as a dart board.

    There’s a thing I’ve learned in all my time in trading. With the exception of very, very, few people, most traders that rely on models usually don’t know how to trade.

    The same applies in science. Anyone that says their model is predictive in a non linear chaotic system is a charlatan. Period. In other words, just as it’s a form of crap trading, climate science models are crap science.

    If we say that there is established science that tells us that too much carbon in the atmosphere is a bad thing or could lead to some indeterminate problems, well that’s another story. But don’t sell me the Brooklyn bridge by even suggesting these worthless models have any predictive abilities as they don’t: not in a non-linear chaotic system like the global climate.

    By the way you seem to have avoided the last point about Hansen.

  57. Sinclair Davidson

    PSC – did you say you could replicate Jones’ results with an ARMA(1,1) or just that it would be a better overall fit? When I run an ARMA(1,1) on the post-1995 sample the time trend p-value is nowhere near 5 percent, or even 10 percent.

  58. JC

    PSC:

    I’ll give you all the data you require from any source. I’ll buy the data on every single trade that has happened in Bank of America stock.

    Please tell me where that stock will be in 10 years time within say 2%.

  59. daddy dave

    Nope – it’s due to having a handy predictive tool called “physics” which is the starting point for climate modeling.
    .
    climate scientists invoking physics as the “starting point” is a bit like meeting some guy at a party you don’t know who keeps telling you he has a famous second cousin on his mother’s side.

  60. daddy dave

    There’s a number of techniques you can use to check if you’ve parameterized/thrown away the appropriate bits.
    .
    What you don’t seem to understand, PSC, is that the end result of this process means that you’ve fitted the model to the data. That’s okay. That’s an entirely legitimate thing to do, in and of itself, but it’s very important to understand that that is what has been done.

  61. “Please tell me where that stock will be in 10 years time within say 2%.”

    When stocks obey the rules of physics and not the vagaries of psychology, then your analogy will make sense. Since they don’t, your argument comparing physics models to trading models falls flat on its face.

  62. conrad

    “Conrad – I thought the AGW crowd promoted tipping points because that allowed an argument for an ETS vis-a-vis a carbon tax.”
    .
    Beats me unfortunately — although there’s nothing wrong with tipping points, as many dynamic systems have them. They are of course exceptionally difficult to predict accurately in even simple systems (so they’re fun to try and model accurately if you’re into that sort of stuff), which is part of the problem really.
    .
    DD:”What you don’t seem to understand, PSC, is that the end result of this process means that you’ve fitted the model to the data”
    .
    So what — it’s how much you can fit that matters. What you have is a model with some underlying equations that can only predict a certain subset of the state-space, so there’s no a priori guarantee you can get an accurate fit, even with any possible parameter set. It’s not like a greedy model where you can just keep on sucking up bits of variance until everything is accounted for.

  63. JC

    Oh bullshit, jarrah.

    They’re both predictive models with a certain number of variables.

    Stocks don’t obey the laws of physics but they obey laws of economics and finance etc.

    Both operate in chaotic, non-linear environments.

    To suggest we can’t predict in one sector but then place huge bets in the other sector which is also non-predictive is absurd.

  64. conrad

    “Models are about as useful as a dart board.”

    Maybe you just don’t understand modeling well, or perhaps models are just no good in your domain (which I doubt, incidentally — what do you think all of these super fast trading systems use?). It’s how we understand innumerable aspects of the world these days, and whether you happen to like that or not isn’t going to stop the increasing use of them (only available computer power will).

  65. daddy dave

    So what — it’s how much you can fit that matters.
    I agree Conrad, it’s a bootstrapping exercise. I was very explicit that there’s nothing wrong with it. However it’s a very different beast than predicting 1000 rolls of a dice, or even calculating orbital trajectories.
    These things can’t and shouldn’t be compared to something like climate science.

  66. daddy dave

    Conrad, here’s a multiple choice question.
    Climate science attracts:
    a) mathematically gifted nerds
    b) people who love nature and want to change the world.
    .
    Then, put those people (whether a or b, my mind is open) in charge of huge multivariate data sets. What will be the result?

  67. “They’re both predictive models with a certain number of variables.”

    Yeah, and in one of them, most of the variables are facets of human behaviour.

    Find me an atom that is self-aware, forms an internal model of the world, can change its behaviour, tries to anticipate the behaviour of other atoms, and is susceptible to following trends set by other atoms, and I’ll start to believe your argument.

    Don’t you remember my post on this subject?

    “then place huge bets in the other sector which is also non-predictive is absurd.”

    Do you fly in planes? Then you are placing a HUGE bet that you won’t die by doing so – based on the physical models that aeronautical engineers use.

  68. JC

    Yeah, and in one of them, most of the variables are facets of human behaviour.

    So what. Human behaviour in that setting is difficult to predict because the setting is non-linear and chaotic.

    Find me an atom that is self-aware, forms an internal model of the world, can change its behaviour, tries to anticipate the behaviour of other atoms, and is susceptible to following trends set by other atoms, and I’ll start to believe your argument.

    Find me an atom with a nucleus and little bits flying around it and I will predict that those little bits will do another orbit within a reasonable degree of certainty. That’s hardly the type of model we’re talking about though, is it? Human behaviour in the setting I described is not predictable as there are far too many variables and a lot of them can’t be predicted. Climate models are basically voodoo science, to use a term that was first uttered by the soft porn author and head of the IPCC.

    Don’t you remember my post on this subject?

    No, but I’ll read it later, although I don’t think I’ll learn much more.

    Do you fly in planes? Then you are placing a HUGE bet that you won’t die by doing so – based on the physical models that aeronautical engineers use.

    Jarrah. It’s entirely the wrong analogy. I wouldn’t have flown in the first jet like you’re asking me to rely on the first climate models, but I will fly in jets that transport around 1 billion people a year from one place to the next because I have a certain degree of confidence (around 3,000,000: 1) that the plane will take off, cruise and land safely. I’m predicting the safety of planes as I have the data to tell me how safe they are within the parameters they are being used.

    That compares to the current situation, where we don’t have verified back-testing other than what people like Phil Jones or Gavin Schmidt tell us, the data isn’t audited, and there isn’t enough proof the models are predictive. See the past decade for their results.

    Here’s Richard Lindzen pretty much dissing models (climate ones):

    KERRY EMANUEL’S Feb. 15 op-ed “Climate changes are proven fact’’ is more advocacy than assessment. Vague terms such as “consistent with,’’ “probably,’’ and “potentially’’ hardly change this. Certainly climate change is real; it occurs all the time. To claim that the little we’ve seen is larger than any change we “have been able to discern’’ for a thousand years is disingenuous. Panels of the National Academy of Sciences and Congress have concluded that the methods used to claim this cannot be used for more than 400 years, if at all. Even the head of the deservedly maligned Climatic Research Unit acknowledges that the medieval period may well have been warmer than the present.

    The claim that everything other than models represents “mere opinion and speculation’’ is also peculiar. Despite their faults, models show that projections of significant warming depend critically on clouds and water vapor, and the physics of these processes can be observationally tested (the normal scientific approach); at this point, the models seem to be failing.
    Finally, given a generation of environmental propaganda, a presidential science adviser (John Holdren) who has promoted alarm since the 1970s, and a government that proposes funding levels for climate research about 20 times the levels in 1991, courage seems hardly the appropriate description – at least for scientists supporting such alarm.
    Richard S. Lindzen
    Cambridge
    The writer is Alfred P. Sloan professor of atmospheric sciences at the Massachusetts Institute of Technology.

    As Lindzen suggests, belief in these models basically resembles a form of religious faith, like doomsday cults. Get out of that cult while you have a chance.

  69. Rafe

    Jarrah, where is the physical model of the earth and its climate that is comparable to the models that aerospace engineers use?

  70. “Jarrah, where is the physical model of the earth and its climate that is comparable to the models that aerospace engineers use?”

    There isn’t one. I was trying to illustrate the difference between trading models and physics models, which JC has declared to be basically the same, and also get him to think about other models that he relies on (perhaps unwittingly).

    “I wouldn’t have flown in the first jet like you’re asking me to rely on the first climate models, but I will fly in jets that transport around 1 billion people a year from one place to the next because I have a certain degree of confidence (around 3,000,000: 1) that the plane will take off, cruise and land safely.”

    That’s absolutely fair. Climate models are not equivalent to wind-tunnel simulations, for example. But they are related to each other in ways they are not to trading models, as outlined above – people’s behaviour is different to atoms’ behaviour.

    “Here’s Richard Lindzen”

    Relying on Lindzen is not a good bet. I prefer the 97% of atmospheric scientists who disagree with him. But that’s just me (and every scientific body on the planet), so don’t let that dissuade you.

  71. JC

    Oh come on. You may not agree with Lindzen, but he’s one of the best scientists in the field.

    Furthermore I’m not concerned if Joe Bloggs or Andy Pitman agrees with Lindzen. I’m more concerned with what he has to say.

    (By the way I think you’d find that the 97% of the scientists in your poll 🙂 would agree with 90% of what Lindzen says.)

  72. rog

    Sinclair, the paper being withdrawn has not stopped sea level from rising, it has thrown into question the rate of sea level rise. Once reworked it may well support an increase in the rate of sea level rise.

  73. JC

    Christ you’re stupid Rog. You really are as dense as twenty bricks in a row.

    Sinclair didn’t say sea levels haven’t risen or aren’t rising, you numbnut.

    He posted a link to the Guardian in which the author says the paper the IPCC used to predict the numeric value of sea level rise is being withdrawn.

    That puts into question the IPCC’s prediction and with it one of the major concerns various economic models such as Stern and Stern’s mini-me Ross Garnaut’s reports relied on to make their appalling economic case.

    This is as serious as it could be, you ungenius.

  74. JC

    It also puts into question Penelope Wong’s warning that all the Sydney suburban beaches will be under water by April 17, 2101.

    🙂

  75. “I’m saying that claiming the sea-levels are rising on the same day as a paper making that very claim is withdrawn undermines your argument.”

    False. The paper made a prediction about future sea-level rises that had serious methodological flaws that necessitated its withdrawal. The fact of sea-level rises to date is indisputable.

    “That puts into question the IPCC’s prediction”

    False. You would know this if you’d read the article – the paper disagreed with the IPCC, forecasting greater sea-level rises. It was a bad paper, but we are yet to see in which direction it was wrong (if the errors substantially change the conclusion at all).

  76. JC

    The study, published in 2009 in Nature Geoscience, one of the top journals in its field, confirmed the conclusions of the 2007 report from the Intergovernmental Panel on Climate Change (IPCC). It used data over the last 22,000 years to predict that sea level would rise by between 7cm and 82cm by the end of the century.

    At the time, Mark Siddall, from the Earth Sciences Department at the University of Bristol, said the study “strengthens the confidence with which one may interpret the IPCC results”. The IPCC said that sea level would probably rise by 18cm-59cm by 2100, though stressed this was based on incomplete information about ice sheet melting and that the true rise could be higher.

    False. You would know this if you’d read the article

    Read the first para and this time try to apologize.

  77. JC, are you blind? How is predicting 7-82 the same as 18-59? Are you pulling a Lambert here? 😉

  78. JC

    Jarrah:

    Are you reading the same article?

    1. The IPCC predicted 18 to 59

    2. The paper which assisted the IPCC in its prediction was predicting 7 to 82.

    As the paper has been pulled it puts into question the IPCC’s prediction.

    Are you saying the paper wasn’t pulled?

  79. rog

    You continue to defy logic JC.

    Arguing against you only serves to amplify your gross stupidity.

  80. “As the paper has been pulled it puts into question the IPCC’s prediction.”

    How so? The IPCC prediction is based on other papers. This one was supposed to have a result that incorporated and exceeded the IPPC prediction, but turned out to be wrong in an unknown direction.

    That said, 7-82 isn’t much of a prediction or an advance of the science. Expanding the probable spectrum of outcomes is hardly improving matters. It’s like going from “we’re not sure” to “we’re REALLY not sure”. Maybe fixing the errors will help this, I don’t know. What I do know is that sea-level rises are a reality that no-one can deny. Not even CL, but he’ll do the usual of simply ignoring it and continue posting news reports about snowstorms as if they meant anything.

  81. JC

    How can you be this blasé about the paper that was pulled and then suggest it doesn’t matter as there are others? It’s like the black knight losing his limbs.

    It’s one of the top papers used, and although its parameters are wide, that says nothing about the reference points taken from the paper to help the IPCC derive its narrower band.

    Here:

    The study, published in 2009 in Nature Geoscience, one of the top journals in its field, confirmed the conclusions of the 2007 report from the Intergovernmental Panel on Climate Change (IPCC).

    Did you read that paragraph? That’s the second time I’ve pointed it out to you now.

    To paraphrase: it was used to confirm the conclusion of the IPCC. In other words, the narrower band is now disputed.

    Do you get it?

  82. JC

    Rog:

    Stop the trolling please.

  83. JC

    Jarrah:

    Aren’t you a little more heartened by the fact that scientists are now pulling papers when they find mistakes, rather than having to rely on the good people at, say, climateaudit to eventually get to them and tear the paper limb from limb? I am.

    It means this climategate scandal, and sites like EU Referendum getting a pliers to Doc Pach’s nose and tightening up, is having an effect.

    The science is opening up for fear of getting a red hot web rod jammed up the butt.

    This is a good thing.

    In fact I applaud the dudes that pulled the paper.

    Can it be worse after further work? Sure it can. It can also be better too. However, the point is that we don’t really know as yet, despite the science being blockbuster settled.

  84. “It’s one of the top papers used…’The study, published in 2009 in Nature Geoscience, one of the top journals in its field, confirmed the conclusions of the 2007 report'”

    It hasn’t been “used” – it came two years after the IPCC report!

    There are lots of papers. The IPCC looked at all the ones published to 2007, and predicted a certain range. A brand-new paper came out in 2009, predicted a different range, but was wrong (though it’s not known if it under- or over-estimated) and has been retracted. It’s a total non-event.

    You are falling prey to the unfortunately common failing of many sceptics and denialists – one paper or piece of evidence is (to y’all) proof positive that AGW is wrong/unsettled/exaggerated/whatever. Yet all the other papers (the vast majority) are somehow worthless or not relevant.

    I think it’s called confirmation bias.

    Have you looked at the satellite measurements yet? You seem to be ignoring them.

  85. JC

    Fme:

    If they pulled the paper that was used to confirm the IPCC report, what does that say about the IPCC’s findings?

    (I may not have been clear earlier).

    Do you think it would warrant further investigation into the IPCC’s own findings and how they reached those conclusions?

    I’m not falling prey to anything at all. All I want to see is that the data is verified and audited and the findings in the IPCC report are gone over with a fine tooth comb, as a result of not knowing what the soft porn author has done.

    The IPCC is basically discredited now, especially when we find that loons like the WWF and Greenpeace were involved too.

    How reliable are the sat measurements? I can’t see how they can be.

  86. PSC

    Sinclair – just that it would be a better overall fit.

  87. PSC

    “What you don’t seem to understand, PSC, is that the end result of this process means that you’ve fitted the model to the data. That’s okay. That’s an entirely legitimate thing to do, in and of itself, but it’s very important to understand that that is what has been done.”

    Fair enough DD, if that’s your view. Still, I’d suggest you get a copy of the texts above and have a good look at how these models are constructed and tuned in practice. There’s several techniques other than “check against historical data” which can be used to build the model.

    And Sinclair – are you sure you’re using the ARMA model right? I did this about 18 months ago, but from memory I was getting p-values below 0.05 after 13 years of data. If I get a chance I’ll see if I can find it and dig it up.

  88. conrad

    “Then, put those people (whether a or b, my mind is open) in charge of huge multivariate data sets. What will be the result?”

    Some fun modeling projects that very few of the general public will be able to understand, no matter how much they want to. Of course, this is true of most complex systems modeling, which is how we know about things as diverse as traffic flow, neural spiking, growth cycles of various natural systems, etc. It also no doubt underlies lots of proprietary software that we don’t know about, like all those algorithms that create most trades on many stockmarkets.

  89. Rafe

    So the paper predicted a rise in the range of 7 to 82. So if we get any rise at all they can still say they got close, within 7, which is not bad in view of the complexity of the system :)

  90. Sinclair Davidson

    PSC – I ran an ARMA (1,1) on the temp_Anomaly series just from 1995M01 trying to replicate the answer to question B above. For just that period the p was about 14 percent.

  91. rog

    Who said “the science being blockbuster settled”?

    Nobody, except blockheads like you

    All these studies detail uncertainties and variables.

    What is settled is the record of sea level rise.

  92. rog

    “If they pulled the paper that was used to confirm the IPCC report was does that say about the IPCC’s findings?”

    Nothing

    The paper reinforced the conclusions made by the IPCC.

  93. dover_beach

    Yes, sea levels have been rising since we emerged from the LIA; nothing startling about that.

    So far as GAMTs are concerned, doesn’t the absence of statistically significant warming since 1995 suggest that GHGs only ‘lukewarm’ the climate rather than anything else? Remember, AGW and particularly catastrophic AGW depend upon GHGs ‘dominating’ the climate. There is also that recent working paper that finds only a temporary increase in GAMT rather than a permanent increase resulting from increasing CO2 in the atmosphere:

    http://economics.huji.ac.il/facultye/beenstock/Nature_Paper091209.pdf

  94. daddy dave

    “Then, put those people (whether a or b, my mind is open) in charge of huge multivariate data sets. What will be the result?”

    Some fun modeling projects that very few of the general public will be able to understand, no matter how much they want to.
    .
    no, the result will be a bunch of stupid hippies staring at a screen, knowing that the fate of the world rests upon the results they get out of that database.

  95. conrad

    “no, the result will be a bunch of stupid hippies staring at a screen, knowing that the fate of the world rests upon the results they get out of that database.”

    That doesn’t make them wrong.

  96. John H.

    So far as GAMTs are concerned, doesn’t the absence of statistically significant warming since 1995 suggest that GHGs only ‘lukewarm’ the climate rather than anything else? Remember, AGW and particularly catastrophic AGW depend upon GHGs ‘dominating’ the climate. There is also that recent working paper that finds only a temporary increase in GAMT rather than a permanent increase resulting from increasing CO2 in the atmosphere:

    Hey DB,

    There was a Nature paper released a few weeks ago which argued that the CO2 forcing effect may be considerably weaker than previous assertions.

  97. daddy dave

    It increases the chance that they are wrong.

  98. John H.

    It increases the chance that they are wrong.

    It does, and if that paper holds up to scrutiny it could have major implications for policy. Virtually all policy is predicated on the idea that we can “turn back the clock” by controlling CO2. I’m sure or at least, that most hope scientists are aware of the fact that the changes we have introduced may very well make it impossible to turn back the clock.

    Note that a great deal of policy development, I think, relies very heavily on the CO2 forcing issue. If that fundamental assumption is problematic, we have to completely rethink the strategies. It is, for the time being at least, problematic!

  99. John H.

    Sorry, I’m typing very quickly and making too many mistakes: Should read.

    I’m sure, or at least hope, that most scientists are aware of ….

  100. Peter Patton

    jc/jarrah

    I have a very weakly formed hypothesis floating around in my head about the role played by Excel (Microsoft package) – or more precisely, the reliance of investment bankers/management consultants on Excel modeling – in the ‘GFC.’ Even more weakly, perhaps some similar speculation on the AGW debate might be rewarding.

  101. dover_beach

    There was a Nature paper released a few weeks ago which argued that the CO2 forcing effect may be considerably weaker than previous assertions.

    Evil, do you have the reference?

  102. John H.

    DB,

    Evil I am but in nature only.

    I thought you might be interested in that DB. See, I can read minds. Hmmm, starting my teaching degree tomorrow …

    As you may be aware I don’t study this stuff. As I asserted above I believe this is a very important assumption in the model.

    If you do look into this I would appreciate your view.

    http://www.nature.com/nature/journal/v463/n7280/edsumm/e100128-07.html
    CO2 feedback recalculated

    Climate warming tends to cause a net release of CO2, which in turn causes an amplification of warming. Estimates of the magnitude of this effect vary widely, leading to a wide range in global warming projections. Recent work suggested that the magnitude of this positive feedback might be about 40 parts per million by volume of CO2 per °C of warming. David Frank and colleagues use three Antarctic ice cores and a suite of climate reconstructions to show that the feedback is likely to be much smaller, with a median of only about 8 p.p.m.v. CO2 per °C.

  103. John H.

    DB,

    Just found this interesting comment by someone who has read the paper and it echoes my concerns:

    http://motls.blogspot.com/2010/01/nature-carbon-cycle-feedback-is-80.html

    Frank et al. explain that the uncertainty about the strength of this “carbon cycle feedback” constitutes approximately 40% of the uncertainty about the whole projected 21st century global warming. Because they have slashed more than 80% of this contribution which used to represent 40% of the total future warming, you may see that one third of the projected 21st century warming has to be deleted, too.

  104. dover_beach

    Cheers, Evil. I’ve seen no discussion of this to-date but its implications are significant. Here’s the abstract:

    The processes controlling the carbon flux and carbon storage of the atmosphere, ocean and terrestrial biosphere are temperature sensitive and are likely to provide a positive feedback leading to amplified anthropogenic warming. Owing to this feedback, at timescales ranging from interannual to the 20–100-kyr cycles of Earth’s orbital variations, warming of the climate system causes a net release of CO2 into the atmosphere; this in turn amplifies warming. But the magnitude of the climate sensitivity of the global carbon cycle (termed γ), and thus of its positive feedback strength, is under debate, giving rise to large uncertainties in global warming projections. Here we quantify the median γ as 7.7 p.p.m.v. CO2 per °C warming, with a likely range of 1.7–21.4 p.p.m.v. CO2 per °C. Sensitivity experiments exclude significant influence of pre-industrial land-use change on these estimates. Our results, based on the coupling of a probabilistic approach with an ensemble of proxy-based temperature reconstructions and pre-industrial CO2 data from three ice cores, provide robust constraints for γ on the policy-relevant multi-decadal to centennial timescales. By using an ensemble of >200,000 members, quantification of γ is not only improved, but also likelihoods can be assigned, thereby providing a benchmark for future model simulations. Although uncertainties do not at present allow exclusion of γ calculated from any of ten coupled carbon–climate models, we find that γ is about twice as likely to fall in the lowermost than in the uppermost quartile of their range. Our results are incompatibly lower (P < 0.05) than recent pre-industrial empirical estimates of ~40 p.p.m.v. CO2 per °C (refs 6, 7), and correspondingly suggest ~80% less potential amplification of ongoing global warming.

    You’ll notice that is an outstanding decrease in the potential amplification that might otherwise have been provided by the natural release of CO2 occurring as a result of warming.

    It also confirms my suspicion that the climate is far more robust than has been thought.

  105. dover_beach

    That is very interesting, Evil. So the effect of this result is to reduce existing calculations of climate sensitivity per doubling by a third. I hope people remember what I’ve been saying for at least 3 years now regarding CS.

  106. John H.

    Thanks DB,

    The first Q & A this year had Rudd speaking to 16 to 25 year olds. I was astounded when he said words to this effect … “Who do you trust, 4,000 white coated humorless scientists or a bunch of sceptics … ”

    Well der, if those scientists were making predictions when, even before this study, a critical causative factor in the models could not be accurately determined, then what is going on!!!

    I saw that abstract weeks ago and sighed. I should be used to this by now, in areas that I do know something about I frequently encounter ideas that are just plain wrong yet very widely disseminated.

    On a completely different topic, this news release highlights a wonderful paper that has enormous implications for our understanding of what brains do. As I wrote to friends:

    “Most studies using imaging technologies are only functional at the gross level: the number of neurons measured is enormous, and this creates all sorts of conceptual dilemmas, or at least it should. This study is a genuine breakthrough because it discovered something extraordinary. It appears that neurons can change their inputs and sensitivity to various frequencies of sound (the study is on the auditory cortex). The conventional view is that neurons have dedicated functions and respond only to specific frequencies. If further studies bear this out, Edelman’s mapping hypothesis is done for; modularity is dead too, though it is always good to see another nail in that coffin; and Goldberg’s gradiental theory, already on shaky ground, looks shakier still. So just perhaps this is Lashley’s mass action hypothesis revealed. Don’t know yet.”

  107. PSC

    Sinclair – found my old notes/model on this – it turns out I was basing it on CRUTEM. I’ve redone it on HadCRU now. A few things (a replication sketch is appended at the end of this comment):

    – 0.14 for 1995 to today looks toppy – I get 0.11 from ARMA, but we might be using different optimization codes to fit the ARMA component, so that may account for some of the difference.

    – you definitely need ARMA(1,1). Observe the following EACF, constructed from the residuals of an OLS fit to the 1985 -> today data.

    [1] 1985
    AR/MA
      0 1 2 3 4 5 6 7 8 9 10 11 12 13
    0 x x x x x x x x o o o  o  o  o
    1 x o o o o o o o o o o  o  o  o
    2 x x o o o o o o o o o  o  o  o
    3 x o x o o o o o o o o  o  o  o
    4 x o x x o o o o o o o  o  o  o
    5 x x o x x o o o o o o  o  o  o
    6 x x o x x o o o o o o  o  o  o
    7 x x o x x o o o o o o  o  o  o

    AIC and visual examination of the ACF and PACF all tell the same story. Different start dates tell the same story.

    Still, it’s not perfect. You can clearly see the 1998 El Niño in the residuals, and there’s a wobble in lags 2–4 of the ACF which looks a bit odd.

    – if you look at the residuals around 1995–1998 they’re quite skewed – presumably the big El Niño. P-values for windows of less than 15 years or so are all over the place; beyond 15 years they settle right down. They seem very sensitive to the start point if you start after about halfway through 1994. I get a p-value of 0.046 if I start in Feb 1994.

    – I get very small and near-zero p-values going back before 1992.
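    For anyone wanting to replicate this, here is a minimal sketch in R of the kind of exercise described above. The data frame and column names are hypothetical – this is not PSC’s actual code.

    # Monthly HadCRU anomalies assumed to be in a data frame 'had'
    # with columns 'year' and 'anom' (names invented for illustration).
    library(TSA)                          # provides eacf()

    sub <- had[had$year >= 1995, ]        # the 1995 -> today window
    tt  <- seq_along(sub$anom) / 120      # time in decades, so the trend is C per decade

    # OLS residuals feed the EACF used to identify the ARMA order
    # (PSC built the EACF from a 1985 -> today window; 1995 is used here for brevity)
    ols <- lm(anom ~ tt, data = sub)
    eacf(residuals(ols))                  # look for the wedge of 'o's opening at (1,1)

    # Linear trend with ARMA(1,1) errors
    fit  <- stats::arima(sub$anom, order = c(1, 0, 1), xreg = cbind(trend = tt))
    beta <- fit$coef["trend"]
    se   <- sqrt(diag(fit$var.coef))["trend"]
    2 * pnorm(-abs(beta / se))            # two-sided p-value for the decadal trend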

  108. conrad

    “Virtually all policy is predicated on the idea that we can “turn back the clock” by controlling CO2”

    No it isn’t. It’s quite well accepted that the carbon cycle is pretty long (i.e., 150 years+, depending on how you want to measure it). This means that even if we stopped today, we wouldn’t see the clock turned back in our lifetime (indeed, we wouldn’t be close).

  109. John H.

    Not my point, Conrad. My point was that once you introduce sizeable variables into a complex process you cannot be sure that the process will not shift in unpredictable ways. I think you made a similar claim in relation to something else, re the effect size of “minor” variables.

  110. JC

    Conrad:

    Go back to the discussion we had earlier. You argue the alarmist side: that we need to insure against the worst possible scenario.

    What possible data do you have to support that argument when the mass of the scientific literature points to about 2 degrees, assuming static technology?

    Hardly a reason for bedwetting.

  111. Sinclair Davidson

    PSC – yes. I think we can rule out Jones using an ARMA model to calculate his trend from 1995: the p-values are too big. This is very annoying – it reminds me of the Fuelwatch debacle. There was an empirical analysis done, but you couldn’t quite work out what it was and couldn’t quite reverse-engineer the result.

  112. PSC

    JC – “I’ll give you all the data you require from any source. I’ll buy the data on every single trade that has happened in Bank of America stock.

    Please tell me where that stock will be in 10 years time within say 2%.”

    Too hard. But there are other classes of securities.

    For instance, suppose a fixed-rate trust preferred security with a $1000 face value, issued by a solid bank, matures in 2020.

    Which statement do you have more confidence in?

    “The market price of the security on 1/1/2020 is $1000 +/- 0.1%”

    or

    “The market price of the security on 1/1/2015 is $1000 +/- 0.1%”

    Another case where long-term prediction is a bit easier than short-term prediction. And in a non-linear, “chaotic” market. A toy calculation appears below.
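    To make the pull-to-par point concrete, here is a toy calculation in R – the coupon, the yield range and the bond itself are all invented for illustration:

    price <- function(face, coupon, yield, years_left) {
      # At maturity the bond repays face (the final coupon is ignored in this toy)
      if (years_left == 0) return(face)
      cf <- c(rep(face * coupon, years_left - 1), face * (1 + coupon))
      sum(cf / (1 + yield)^seq_len(years_left))
    }
    yields <- seq(0.02, 0.08, by = 0.01)   # a wide band of possible yields
    range(sapply(yields, price, face = 1000, coupon = 0.05, years_left = 5))  # 2015: wide price range
    range(sapply(yields, price, face = 1000, coupon = 0.05, years_left = 0))  # 2020 (maturity): exactly 1000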

  113. PSC

    On second thoughts, scratch the trust preferred – we might quibble about credit quality or whatever – let’s make them Treasury bonds.

  114. JC

    JC – “I’ll give you all the data you require from any source. I’ll buy the data on every single trade that has happened in Bank of America stock.
    Please tell me where that stock will be in 10 years time within say 2%.”
    Too hard. But there are other classes of securities.

    Actually it’s not too hard, PSC – it’s an impossible request, which is far different from “too hard”.

    For instance suppose a fixed rate trust preferred with $1000 face value in a solid bank is maturing in 2020.
    Which statement do you have more confidence in?
    “The market price of the security on 1/1/2020 is $1000 +/- 0.1%”
    or
    “The market price of the security on 1/1/2015 is $1000 +/- 0.1%”
    Another case where long term prediction is a bit easier than short term prediction. And in a non-linear “chaotic” market.

    Hahahaha, nice try. So the atmosphere has an expiry date, does it? You must really be scared.

    I think if you were going to be honest about your example, you would use a perpetual preferred, as that will theoretically expire only when the earth disappears as a result of serious global warming – in perhaps 2 billion years or so. Warming so serious the planet itself disappears.

    PSC, please don’t play games like that, as it’s a little insulting.

  115. JC

    Treasury bonds are a shitty idea for the same reason enumerated above.

    They have an expiry date, which obviously has an effect on valuation that is markedly different to a stock’s.

    Stocks are the best analogy, PSC. The reason is that a stock is a perpetuity with an open rate of return, instead of being locked into a fixed rate.

  116. conrad

    “What possible data do you have to support that argument when the mass of the scientific literature points to about 2 degrees, assuming static technology?”

    I’m happy with the scientific consensus, so you can argue with them instead.

  117. John H.

    From McCloskey’s “The Unreasonable Ineffectiveness of Fisherian ‘Tests’ in Biology, and Especially in Medicine”:

    “But the probability of the hypothesis, given the data, is not what has been tested. The probability that the person is normal, given a positive test for schizophrenia, is in truth quite strong – about 60% – not, as Fisherians believe, less than 3%, because, by Bayes’ Theorem …”

    This is exactly the same argument I read in Info: The New Language of Science. It is too often ignored.
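    For what it’s worth, the 60% reproduces via Bayes’ Theorem from the numbers in Cohen’s screening-test example – 2% prevalence, 95% sensitivity, 97% specificity (my assumption that these are the figures McCloskey borrows from Cohen):

    prev <- 0.02; sens <- 0.95; spec <- 0.97   # assumed Cohen figures, not from McCloskey's text
    false_pos <- (1 - spec) * (1 - prev)       # P(test positive & normal)
    true_pos  <- sens * prev                   # P(test positive & schizophrenic)
    false_pos / (false_pos + true_pos)         # P(normal | positive) = 0.607, i.e. about 60%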

    Love the title of this work – clearly a play on the physicist Wigner’s famous essay “The Unreasonable Effectiveness of Mathematics in the Natural Sciences”.

    Sadly, McCloskey’s paper seems to affirm something I read a long time ago in the geneticist Steve Jones’s text The Language of Genes:

    p. 154: “There are two cultures in science: one (to which most scientists belong) uses mathematics and the other understands it.”

    p. 226: “I sometimes illustrate [the problem] of using family similarity to establish the importance of biology by asking what, of our attributes, is most similar among British parents and their children – or their sisters and their cousins and their aunts. The answer is bank-balance.”

  118. daddy dave

    John H…
    there are serious practical and theoretical problems with significance testing, and that’s one of them. The probability of the data given the hypothesis (e.g., .05) is not the same as the probability of the hypothesis given the data.

  119. John H.

    Last night I sent that McCloskey article off to my friend in NY. He was trained in statistics by Cohen (McCloskey uses one of his examples) and recalled how Cohen applied his power test to studies in psychology and found that a great many failed as valid conclusions. My friend also noted that when the statistical analysis did not achieve the precious .05, researchers checked their arithmetic; but if .05 was reached, there was no checking of the arithmetic. Consequently, simple arithmetic errors can easily slip through to the published data.

    My friend learnt all this a long time ago, before most of you lot were born. And yet the problem continues. Disturbing.

    Sinclair, thanks for the referral to McCloskey – right on the money.

  120. Pingback: Oomph again at Catallaxy Files


  122. Pingback: Saved or spent? at Catallaxy Files
