Chris Carter, author of Science and Psychic Phenomena and other books, has sent me a detailed response to J.E. Kennedy's review and to some of the comments that have appeared on this blog. He's given me permission to publish it, so here it is in its entirety. (Everything that follows is from Chris.)
=========
Michael, this is directly from my book, Science and Psychic Phenomena:
“However, it later turned out that Milton and Wiseman had botched their statistical analysis of the ganzfeld experiments by failing to consider sample size. Dean Radin simply added up the total number of hits and trials conducted in those thirty studies (the standard method of doing meta-analysis) and found a statistically significant result with odds against chance of about twenty to one.
“The thirty studies that Milton and Wiseman considered ranged in size from four trials to one hundred, but they used a statistical method that simply ignored sample size (N). For instance, say we have three studies, two with N = 8, 2 hits (25 percent), and a third with N = 60, 21 hits (35 percent). If we ignore sample size, then the un-weighted average percentage of hits is only 28 percent; but the combined average of all the hits is just under 33 percent. This, in simplest terms, is the mistake they made. Had they simply added up the hits and misses and then performed a simple one-tailed t-test, they would have found results significant at the 5 percent level. As statistician Jessica Utts later pointed out, had Milton and Wiseman performed the exact binomial test, the results would have been significant at less than the 4 percent level, with odds against chance of twenty-six to one.” (p. 99)
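For readers who want to check the arithmetic in this example, here is a minimal sketch in Python using the numbers quoted above:

    # Carter's example: two studies of N = 8 with 2 hits each,
    # and one study of N = 60 with 21 hits.
    hits = [2, 2, 21]
    trials = [8, 8, 60]

    # Unweighted average of the per-study hit rates (ignores N)
    unweighted = sum(h / n for h, n in zip(hits, trials)) / len(hits)

    # Pooled hit rate: total hits over total trials (weights by N)
    pooled = sum(hits) / sum(trials)

    print(f"unweighted average: {unweighted:.1%}")  # about 28%
    print(f"pooled rate:        {pooled:.1%}")      # just under 33%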
Kennedy wrote: “The large number of methodological decisions for meta-analyses, like other types of post hoc analyses, provides great opportunity for researchers to consciously or unconsciously bias the results. The endless debates about different possible statistical tests, inclusion cutoff criteria, data trimming, data transformations, and so forth, have no convincing resolutions.”
We can easily see that Kennedy’s point is nonsense. Simply adding up the total number of hits and trials and then performing a straightforward t-test of significance – as Dean Radin did – has nothing to do with, as Kennedy puts it, “endless debates about different possible statistical tests, inclusion cutoff criteria, data trimming, data transformations, and so forth.” It is a straightforward method of statistical analysis that is taught in all first-year statistics courses.
Table 1: Standard Ganzfeld Replications 1991-2003

Laboratory                               Sessions   Hit Rate
PRL, Princeton, NJ                            354   34 percent
University of Amsterdam, Netherlands           76   38 percent
University of Edinburgh, Scotland              97   33 percent
Institute for Parapsychology, NC              100   33 percent
University of Edinburgh, Scotland             151   27 percent
University of Amsterdam, Netherlands           64   30 percent
University of Edinburgh, Scotland             128   47 percent
University of Gothenburg, Sweden              150   36 percent
University of Gothenburg, Sweden               74   32 percent
Totals:                                      1194   34.4 percent
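As a sanity check on the table’s bottom line, one can run the pooled exact binomial test directly on the totals. The table reports only the overall rate, so the hit count below (411, roughly 34.4 percent of 1,194 sessions) is inferred rather than taken from the table:

    from scipy.stats import binomtest

    trials = 1194
    hits = 411       # inferred from the reported 34.4% overall hit rate
    chance = 0.25    # four-choice ganzfeld design

    # One-tailed exact binomial test against the 25% chance baseline
    result = binomtest(hits, trials, chance, alternative='greater')
    print(result.pvalue)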
Strangely, in an exchange with me in the book Debating Psychic Experiences, arch-“skeptic” Ray Hyman mentioned only one of these replication studies:
Instead of conducting meta-analyses on already completed experiments on the Ganzfeld, for example, the parapsychologists might have tried to directly replicate the auto-Ganzfeld experiments with a study created for the stated purpose of replication. The study would be designed specifically for this purpose and would have adequate power. In fact, such studies have been carried out. An example would be Broughton’s attempt to deliberately replicate the auto-Ganzfeld results with enough subjects to insure adequate power. This replication failed. From a scientific viewpoint this replication attempt is much more meaningful than the retrospective combining of already completed (and clearly heterogeneous) experiments. (emphasis added)
But is this replication attempt really “much more meaningful from a scientific viewpoint” than the combined results in a meta-analysis? If the true hit rate were 33 percent with 25 percent expected by chance alone, then the probability that a sample of 151 trials will fail to yield results significant at the 5 percent level is 28 percent. In other words, Broughton’s failure to replicate with a sample that small is even less remarkable than flipping a coin twice and getting heads both times.
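Carter’s 28 percent figure is a standard binomial power calculation, which can be reproduced along these lines (the exact answer depends on how the 5 percent cutoff is handled at the discrete boundary):

    from scipy.stats import binom

    n, chance, true_rate = 151, 0.25, 0.33

    # Smallest hit count significant at the one-tailed 5% level under
    # chance; binom.sf(k - 1, n, p) gives P(X >= k).
    crit = min(k for k in range(n + 1) if binom.sf(k - 1, n, chance) <= 0.05)

    # Probability of falling short of that threshold when the true
    # hit rate is 33% -- i.e., the chance of a "failed" replication.
    p_fail = binom.cdf(crit - 1, n, true_rate)
    print(crit, p_fail)   # p_fail comes out near Carter's 28 percent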
As an example of a replication study, Hyman could just as easily have mentioned Kathy Dalton’s (1997) study using creative individuals, which achieved a hit rate of 47 percent. The odds-against-chance of this result is over 140 million to one. This closely replicated the auto-Ganzfeld results mentioned above (Schlitz & Honorton, 1992), which found a 50 percent hit rate for students from the Juilliard School. It also closely matched results from a study using primarily musicians (Morris, Cunningham, McAlpine, & Taylor, 1993), which found a 41 percent hit rate.
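The order of magnitude of the odds quoted for Dalton’s study can be checked the same way; the hit count here (60 of 128, about 47 percent) is inferred from the quoted rate rather than taken from the paper:

    from scipy.stats import binomtest

    # Dalton (1997): 128 trials at a 47% hit rate; 60 hits is an inference
    p = binomtest(60, 128, 0.25, alternative='greater').pvalue
    print(p, 1 / p)   # one-tailed p-value and the implied odds against chance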
These figures should make the conclusion clear: the earlier results have been replicated by a variety of researchers in different laboratories in different cultures, with similar hit rates. Hyman (1996a) wrote: “The case for psychic functioning seems better than it has ever been…. I also have to admit that I do not have a ready explanation for these observed effects” (p. 43). Hyman and the other “skeptics” have lost the Ganzfeld debate.
Meta-analysis of the Ganzfeld
However, instead of debating the merits of individual studies, what do the data considered as a whole tell us? Meta-analysis is designed specifically to answer this question, and Dean Radin (2006) has performed one on all Ganzfeld experiments (confirmatory and exploratory) conducted over a 30-year period. He wrote:
From 1974 through 2004 a total of 88 Ganzfeld experiments reporting 1,008 hits in 3,145 trials were conducted. The combined hit rate was 32 percent as compared to the chance-expected 25 percent. This 7 percent above-chance effect is associated with odds against chance of 29 quintillion to 1. (p. 120)
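Radin’s headline figure is consistent with a pooled binomial test on the totals he reports; here is a minimal sketch (Radin’s own computation may have used a different combination method):

    from scipy.stats import binomtest

    # Radin (2006): 1,008 hits in 3,145 trials, chance expectation 25%
    result = binomtest(1008, 3145, 0.25, alternative='greater')
    print(result.pvalue)      # on the order of 10**-19 to 10**-20
    print(1 / result.pvalue)  # odds against chance, order of 10**19 to 1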
Could the results be due to a file-drawer problem of unreported failures? Radin answers:
If we insisted that there had to be a selective reporting problem, even though there’s no evidence of one, then a conservative estimate of the number of studies needed to nullify the observed results is 2,002. That’s a ratio of 23 file drawer studies to each known study, which means that each of the 30 known investigators would have had to conduct but not report 67 additional studies. Because the average Ganzfeld study had 36 trials, these 2,002 “missing” studies would have required 72,072 additional sessions (36 x 2002). To generate this many sessions would mean continually running Ganzfeld sessions 24 hours a day, 7 days a week, for 36 years, and for not a single one of those sessions to see the light of day. (p. 121)
Ersby’s comments
"Kennedy’s argument is that meta-analyses offer enough subjective leeway to adjust results until the required figure is achieved. The Bem, Palmer & Broughton paper actually supports it, rather than refutes it. The BPP paper is, after all, a reworking of a previous meta-analysis with an additional inclusion criteria [sic] which had never been used before and has not been used since. This is exactly the kind of subjectivity that Kennedy is talking about.”
The “additional inclusion criterion” was simply the status of the studies used, that is, whether they were meant to be confirmatory or exploratory. Far from never having been used before, or being somehow “subjective,” this was actually a criterion specified in the joint communiqué written with skeptic Ray Hyman. As I wrote in my book:
“In their joint communiqué, Hyman and Honorton asked future ganzfeld investigators, as part of their “more stringent standards,” to clearly document the status of the experiment; that is, whether it was meant to merely confirm previous findings or to explore novel conditions. The problem with the Milton and Wiseman study was that it simply lumped all studies together, regardless of whether the status of each study was confirmatory or exploratory. In other words, Milton and Wiseman made no attempt to determine the degree to which the individual studies complied with the standard ganzfeld protocol as spelled out in the joint communiqué.” (p. 100)
Meta-analysis is essentially combining many smaller studies into one larger study in order to exploit the greater statistical power of larger sample sizes to detect effects with statistical significance. As such, it is only useful when all of the studies are of the same nature. This is indeed standard practice, and common sense.
Ersby adds:
“It’s worth noting that Standardness alone does not bring Milton & Wiseman’s meta-analysis up to statistical significance. In other words, the hypothesis that M&W’s poor results was due to including exploratory work is not supported by the data. It is only when the new data from 1997-1999 is introduced does the result reach significance.”
Simply not true. As I wrote above, had they simply added up the hits and misses and then performed a simple one-tailed t-test, they would have found results significant at the 5 percent level. Wiseman, by the way, did not dispute this when statistician Jessica Utts pointed this fact out at a conference.
“Which brings me to the other argument about the M&W: that Milton and Wiseman deliberately excluded the Dalton experiment to achieve a null result. But if you add the Dalton experiment to the M&W database, and use their method to calculate the effect size, it still doesn't achieve significance.”
Again, simply not true. It attains significance even without the Dalton experiment included.
“Lastly, Chris Carter's claim that Milton & Wiseman 'botched' their statistical analysis doesn't stand up to scrutiny. M&W used the same method as Honorton did in 1985. And, as Kennedy states, the method used by Carter is not considered a standard meta-analysis technique."
Again, not true. According to Honorton: "I calculated the exact binomial probability for each study and obtained its associated Z score." This means his z scores were indeed weighted by sample size, so no, Milton & Wiseman did not use the same method that Honorton did. And by the way, the exact binomial test (which uses probabilities and combinations of possible chance results and so explicitly takes into account sample size) most certainly is considered a standard technique.
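For readers unfamiliar with the procedure Honorton describes, here is a minimal sketch of computing an exact binomial probability for a single study and converting it to a z score; the study figures at the end are hypothetical, for illustration only:

    from scipy.stats import binomtest, norm

    def study_z(hits, trials, chance=0.25):
        # Exact binomial (one-tailed) p-value for this study...
        p = binomtest(hits, trials, chance, alternative='greater').pvalue
        # ...converted to the z score with the same tail probability.
        # Because the exact tail depends on the number of trials,
        # sample size is built into the resulting z score.
        return norm.isf(p)

    print(study_z(40, 100))   # hypothetical study: 40 hits in 100 trials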
No statistician would ever argue that sample size should not be considered in performing statistical tests. Sample size is crucial to proper statistical analysis, and I would not have expected Milton & Wiseman’s error from any competent first-year statistics student. The error is not even sophomoric.
No wonder this person writes under a pseudonym.
------
Bem, D., and Honorton, C., 1994. “Does Psi Exist?” Psychological Bulletin, 115(1), pp. 4-18.
Bem, D., Palmer, J., and Broughton, R., 2001. “Updating the Ganzfeld Database: A Victim of Its Own Success?” Journal of Parapsychology, 65, pp. 207-218.
Bierman, D. J., 1995. “The Amsterdam Ganzfeld Series III & IV: Target clip emotionality, effect sizes, and openness.” Proceedings of the 38th Annual Parapsychological Association Convention, pp. 27-37.
Broughton, R., and Alexander, C., 1995. “AutoGanzfeld II: The first 100 sessions.” Proceedings of the 38th Annual Parapsychological Association Convention, pp. 53-61.
Broughton, R. S., and Alexander, C. H., 1997. “AutoGanzfeld II: An attempted replication of the PRL Ganzfeld research.” Journal of Parapsychology, 61, pp. 209-226.
Carter, C., 2012. Science and Psychic Phenomena: The Fall of the House of Skeptics. Rochester, VT: Inner Traditions.
Collins, H. H., 1985. Changing Order: Replication and Induction in Scientific Practice. Beverly Hills, CA: Sage.
Dalton, K., 1997. “Exploring the Links: Creativity and Psi in the Ganzfeld.” Proceedings of Presented Papers: The Parapsychological Association 40th Annual Convention, pp. 119-134.
Harris, M., and Rosenthal, R., 1988. Human Performance Research: An Overview. Washington, DC: National Academy Press.
Honorton, C., 1975. “Error some place!” Journal of Communication, 25, pp. 103-116.
Honorton, C., 1985. “Meta-analysis of Psi Ganzfeld Research: A Response to Hyman.” Journal of Parapsychology, 49, pp. 51-91.
Honorton, C., 1993. “Rhetoric over Substance: The Impoverished State of Skepticism.” Journal of Parapsychology, 57, pp. 191-214.
Hyman, R., and Honorton, C., 1986. “A joint communiqué: The psi Ganzfeld controversy.” Journal of Parapsychology, 50, pp. 351-364.
Hyman, R., 1991. “Comment.” Statistical Science, 6, pp. 389-392.
Hyman, R., 1996a. “Evaluation of Program on Anomalous Mental Phenomena.” Journal of Scientific Exploration, 10, pp. 31-58.
Hyman, R., 1996b. “The Evidence for Psychic Functioning: Claims vs. Reality.” Skeptical Inquirer, March/April 1996, pp. 24-26.
Morris, R., Cunningham, S., McAlpine, S., and Taylor, R., 1993. “Toward replication and extension of autoGanzfeld results.” Proceedings of the Parapsychological Association 36th Annual Convention, Toronto, Canada, pp. 177-191.
Morris, R., Dalton, K., Delanoy, D., and Watt, C., 1995. “Comparison of the sender/no sender condition in the Ganzfeld.” Proceedings of the 38th Annual Parapsychological Association Convention, pp. 244-259.
Parker, A., 2000. “A Review of the Ganzfeld Work at Gothenburg University.” Journal of the Society for Psychical Research, 64, pp. 1-15.
Parker, A., 2003. “We ask, does psi exist?” Journal of Consciousness Studies, 10(6-7), pp. 111-134.
Radin, D., 1997. The Conscious Universe: The Scientific Truth of Psychic Phenomena. San Francisco: HarperCollins.
Radin, D., 2006. Entangled Minds. New York, NY: Pocket Books.
Rosenthal, R., 2002. “Covert communication in classrooms, clinics, courtrooms, and cubicles.” American Psychologist, 57(11), pp. 839-849.
Schlitz, M. J., and Honorton, C., 1992. “Ganzfeld psi performance with an artistically gifted population.” Journal of the American Society for Psychical Research, 86, pp. 93-98.
Wright, T., and Parker, A., 2003. “An Attempt to Improve ESP Scores Using the Real Time Digital Ganzfeld Technique.” European Journal of Parapsychology, 18, pp. 69-75.
That's what's called a more than adequate defense. I'll be curious to see if there's a response. I've always considered Chris Carter to be extremely conscientious in his reporting.
Posted by: Anthony McCarthy | February 05, 2014 at 06:08 PM
Thank you, Chris. That is what I was saying to ersby - pooling the results and performing the exact binomial probability is The Way To Go. He had supplied some link of dubious quality saying that it is not done - which was obviously ridiculous.
One good reason to use exactly the same experimental design and protocols is so you can pool trials/observations from many experiments *as if it were one big experiment*. Ersby went out of his way to try to obfuscate that this is so. Classic skeptic tactic.
As you say, he also obfuscated the issue of inclusion criteria (i.e. exploratory versus confirmatory).
That said, I do think that you opened the door just a crack by what I think is an oversimplification of the statistical methodology that M&W did use, which was Stouffer's Z. They did not simply take an average of the probabilities from each experiment. Saying that they did allows for an ersby to jump in with all kinds of techno gobbledygook that would appear correct to a layman on superficial examination of available material. Then again, I know that the ersbys of the world will do what they do regardless of open doors. IMO, a better approach would be to explain exactly the analysis of M&W and then walk through to the exact point of their departure from righteousness.
Thanks again and looking forward to any future work you produce.
Posted by: no one | February 05, 2014 at 06:50 PM
The fact that 128 trials conducted by the University of Edinburgh resulted in a 47 percent hit rate is pretty amazing. And the average hit rate of 34.4 percent is actually pretty good. If you could go to the racetrack and hit like that, you'd make a lot of money.
Anyway, I loved Chris Carter's books, glad to see his response here.
Posted by: Kathleen | February 05, 2014 at 07:49 PM
I'm curious: What was Broughton's hit rate? I hereby repeat my proposal that we designate "movement" skeptics with a capital letter (Skeptics), to avoid the need for sneer quotes. When we get exasperated we can switch to using Scoftics (scoffers masquerading as skeptics, as Tuzzi called them).
Posted by: Roger Knights | February 05, 2014 at 09:42 PM
I think the table would be improved with a 4th column showing the number of hits in each study. That way the bottom line would show the total number of trials and hits, and the reader could verify the arithmetic by calculating the hit rate himself.
Posted by: Roger Knights | February 05, 2014 at 10:13 PM
I hope Carter responds to Sudduth's criticisms, don't you think?
http://subversivethinking.blogspot.com.es/2014/01/interview-with-analytic-philosopher-of.html
Posted by: Juan | February 06, 2014 at 05:25 AM
Thanks to Michael for posting this, and I appreciate Chris Carter taking the time to respond to an internet debate. However, before I begin, I must emphasise that this discussion is not about whether or not psi exists, but whether meta-analyses are too subjective to resolve the ganzfeld debate.
Let’s take a close look at the figures Chris Carter provided. (Sections from Carter’s reply are indented with “- - -” )
- - - Table 1: Standard Ganzfeld Replications 1991-2003
- - - PRL, Princeton, NJ, 354 trials, 34%
Actually, this work was finished in 1989 and first published in 1990, so strictly speaking it shouldn’t be on this list.
- - - University of Amsterdam, Netherlands, 76 trials, 38%
Two points to make here: Firstly, overall the figures from Amsterdam/Utrecht did not achieve significance (N=477, Hits=135, 28%, z-score: 1.60). Secondly, there is no single experiment with 76 trials carried out at Amsterdam. He’s combined two experiments, series 3 and 4a, ignoring other data from the same period.
- - - University of Edinburgh, Scotland, 97 trials, 33%
This is Morris, Delanoy, Watt (1995)
- - - Institute for Parapsychology, NC, 100 trials, 33 percent
As I mentioned in the “Kennedy vs. Carter” comments section, these are actually the early results from the Autoganzfeld II experiment. Once the experiment was complete, it scored at chance (27%). I notice he referenced the completed Autoganzfeld II paper in his list of references but does not seem to have included the updated results.
- - - University of Edinburgh, Scotland, 151 trials, 27%
I have to admit, I’m scratching my head over this one. There is no single experiment from Edinburgh with 151 trials and I can find no way of adding smaller experiments together that gives a result of 27% with 151 trials.
- - - University of Amsterdam, Netherlands, 64 trials, 30%
This figure is actually two ganzfeld experiments with completely dissimilar hypotheses and methods that Carter’s stuck together (Series 4b and the Eigensender experiment). I do not understand the reasons for doing so.
- - - University of Edinburgh, Scotland, 128 trials, 47%
This is Dalton’s 1997 paper.
- - - University of Gothenburg, Sweden, 150 trials, 36%
The first five series from Parker.
- - - University of Gothenburg, Sweden, 74 trials, 32%
Bearing in mind that Carter believes the ganzfeld meta-analysis debate is not about data trimming or inclusion cutoffs, it is worth noting that after 2003 the results from Gothenburg took a considerable dip, such that the post-2000 work (including the data quoted above) is not statistically significant.
In other words, out of the institutions Carter has listed in his table, two did not return significant results (Institute of Parapsychology and Amsterdam/Utrecht), another was only marginally significant (Gothenburg, 468 trials, 136 hits, 29%, z=1.95) and a fourth relied solely on the results from one experimenter: Koestler (with Dalton: 562 trials, 187 hits, 33%, z=4.35; without Dalton: 382 trials, 107 hits, 28%, z=1.29).
So you see, the debate over the ganzfeld results isn’t as cut and dried as Carter would have us believe. And all of that is using the statistical method suggested by Carter of pooling the data together.
Clearly, the ganzfeld database is large enough and flexible enough that the “signal in the noise” that appears throughout the ganzfeld data (and there certainly seems to be something there) can be manipulated.
- - - Meta-analysis is essentially combining many smaller studies into one larger study in order to exploit the greater statistical power of larger sample sizes to detect effects with statistical significance. As such, it is only useful when all of the studies are of the same nature. This is indeed standard practice, and common sense.
Of course, it is wise to set inclusion criteria that account for methodological differences, but only M&W’s meta-analysis has had this criterion placed upon it. None of the other ganzfeld meta-analyses have been criticised in the same way, even though they all use broadly similar methods.
- - - It attains significance even without the Dalton experiment included.
Yes, it achieves significance at z=1.9 or odds of about 1 in 36. And, once again, I reiterate that because M&W chose to use the primary scores of each experiment, the option of pooling hits wasn’t available since three experiments used z-scores as their scoring method.
There is nothing intrinsically “wrong” in choosing to use each experiment’s primary measure, especially considering that the Joint Communiqué by Hyman and Honorton included a section about multiple analyses.
- - - According to Honorton: "I calculated the exact binomial probability for each study and obtained its associated Z score." This means his z scores were indeed weighted by sample size, so no, Milton & Wiseman did not use the same method that Honorton did.
Yes, he did. He used an unweighted z-score for the overall effect size. That quote is telling us he calculated a z-score for each individual experiment. The quote where Honorton describes his method for calculating the overall z-score is:
“A composite Z score was computed by the Stouffer method recommended by Rosenthal (1978). This involves dividing the sum of the Z scores for the individual studies by the square root of the number of studies.” (JoP, vol 49, p59)
This is the same method that Milton and Wiseman used:
“The cumulated probability of all the studies, calculated (as specified in advance) by the Stouffer method” (Psychological Bulletin, Vol 124, No 4, p388)
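For concreteness, the Stouffer combination both quotes describe amounts to the following; the z scores are made up for illustration:

    import math

    def stouffer_z(z_scores):
        # Sum of the per-study z scores divided by the square root of
        # the number of studies. The trial counts never enter the
        # formula, which is why the method is described as unweighted.
        return sum(z_scores) / math.sqrt(len(z_scores))

    # A large study and a small study with the same z contribute
    # identically under this combination.
    print(stouffer_z([1.2, -0.3, 2.1]))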
- - - And by the way, the exact binomial test (which uses probabilities and combinations of possible chance results and so explicitly takes into account sample size) most certainly is considered a standard technique.
As a non-statistician, I can only rely on authority rather than my own expertise, but I would say that I have found very little support for the pooling-data method in books describing meta-analysis techniques. For example, in the book “Research Methods and Data Analysis in Psychology” by Darren Langdridge (Pearson Education, 2004), on page 244 there is a flow chart to help you decide which is the correct statistical measure to use for different kinds of data. None of the twelve statistical methods it suggests are to simply pool the data together.
- - - No statistician would ever argue that sample size should not be considered in performing statistical tests.
And neither would I.
References:
Bierman, D. J., 1995. “The Amsterdam ganzfeld series III & IV: Target clip emotionality, effect sizes and openness.” The Parapsychological Association 38th Annual Convention: Proceedings of Presented Papers, Durham, NC: Parapsychological Association, pp. 27-37.
Bierman, Bosga, Gerding, and Wezelman, 1993. “Anomalous information access in the Ganzfeld: Utrecht - Novice series I and II.” Proceedings of the 36th Parapsychological Association Convention.
Broughton, R. S., and Alexander, C. H., 1997. “AutoGanzfeld II: An attempted replication of the PRL Ganzfeld research.” Journal of Parapsychology, 61, pp. 209-226.
Dalton, K., 1997. “Exploring the Links: Creativity and Psi in the Ganzfeld.” Proceedings of Presented Papers: The Parapsychological Association 40th Annual Convention, pp. 119-134.
Honorton, C., 1985. “Meta-Analysis of Psi Ganzfeld Research: A Response to Hyman.” Journal of Parapsychology, 49, pp. 51-91.
Honorton, C., Berger, R. E., Varvoglis, M. P., Quant, M., Derr, P., Schechter, E. I., and Ferrari, D. C., 1990. “Psi communication in the ganzfeld: Experiments with an automated testing system and a comparison with a meta-analysis of earlier studies.” Journal of Parapsychology, 54, pp. 99-139.
Hyman, R., and Honorton, C., 1986. “A joint communiqué: The psi ganzfeld controversy.” Journal of Parapsychology, 50, pp. 351-364.
Langdridge, D., 2004. Research Methods and Data Analysis in Psychology. Pearson Education.
Milton, J., and Wiseman, R., 1999. “Does psi exist? Lack of replication of an anomalous process of information transfer.” Psychological Bulletin, 125(4), pp. 387-391.
Morris, R., Dalton, K., Delanoy, D., and Watt, C., 1995. “Comparison of the sender/no sender condition in the Ganzfeld.” Proceedings of the 38th Annual Parapsychological Association Convention, pp. 244-259.
Parker, A., 2000. “A review of the Ganzfeld work at Gothenburg University.” Journal of the Society for Psychical Research, 64, pp. 1-15.
Parker, A., 2010. “A ganzfeld study using identical twins.” Journal of the Society for Psychical Research, 73(899), pp. 118-126; Proceedings of the International Conference of Psychical Research/Proceedings of the Parapsychological Association 2008, Winchester, UK.
Parker, A., and Sjödén, B., 2010. “The effect of priming of film clips prior to ganzfeld mentation.” European Journal of Parapsychology, 25, pp. 76-88.
Parker, A., Frederiksen, A., and Johansson, H., 1997. “Towards specifying the recipe for success with the ganzfeld: Replication of the ganzfeld findings using a manual ganzfeld with subjects reporting paranormal experiences.” European Journal of Parapsychology, 13, pp. 15-27.
Westerlund, J., Parker, A., Dalkvist, J., and Goulding, A., 2004. “Remarkable Correspondences Between Ganzfeld Mentation and Target Content - Psi or a Cognitive Illusion?” Proceedings of the Parapsychological Association Convention 2004, pp. 255-267.
Wezelman, R., and Bierman, D. J., 1997. “Process oriented ganzfeld research in Amsterdam: Series IV B (1995): Emotionality of target material, Series V (1996) and Series VI (1997): Judging procedure and altered states of consciousness.” Proceedings of the 40th Annual Convention of the Parapsychological Association, Durham, NC: Parapsychological Association, pp. 477-491.
Wezelman, R., Gerding, J. L. F., and Verhoeven, I., 1997. “Eigensender ganzfeld psi: An experiment in practical philosophy.” European Journal of Parapsychology, 13, pp. 28-39.
Wright, T., and Parker, A., 2003. “An Attempt to Improve ESP Scores Using the Real Time Digital Ganzfeld Technique.” European Journal of Parapsychology, 18, pp. 69-75.
Posted by: ersby | February 06, 2014 at 08:14 AM
There are various technical statistical issues related to meta-analyses and Chris’s comments that are probably beyond the interest and expertise of most readers of this blog. I published a recent paper, “Can Parapsychology Move Beyond the Controversies of Retrospective Meta-Analysis” (http://jeksite.org/psi/jp13a.pdf), that discusses these issues, and I will let interested readers look there. The introduction and early part of that paper may be of interest and accessible to readers who are not knowledgeable about statistics. Here is one quote from the introduction. It is from a paper by a couple of psychologists talking about methodological issues in psychological research in general; it was not dealing specifically with parapsychology. They said:
“[W]e have seldom seen a meta-analysis resolve a controversial debate in a field. ... [W]e observe that the notion that meta-analyses are arbiters of data-driven debates does not appear to hold true. ... [M]eta-analyses may be used in such debates to essentially confound the process of replication and falsification. ... [F]ocusing on the average effect size may be used to, in effect, brush the issue of failed replication under the theoretical rug ... .”
I believe that there is an increasing awareness that this observation is true. For the past 25 years, the debates about psi phenomena have focused on meta-analysis. During this same time, the scientific interest, activity, and support for experimental parapsychology have steadily declined. At this point, one option is to hunker down and try to argue more forcefully for meta-analysis. As discussed on another posting, another option is to admit that there are obstacles to reliable psi effects that are currently not understood and try different research strategies.
Another option is to tell people who have different perspectives that their views are “nonsense” and their knowledge is “not even sophomoric.” I’ll leave it to readers to decide how helpful that strategy is.
One statistical point is worth clarification. Chris is correct that the Milton & Wiseman meta-analysis used an analysis method (Stouffer’s Z) that does not consider sample size and is not an optimal method. This is discussed in my paper above. However, it should be noted that this method has been widely used in parapsychology, and Milton & Wiseman were just doing what has been common practice. For example, it was used in Honorton’s ganzfeld meta-analysis, in the subsequent ganzfeld meta-analysis by Storm et al., in various other meta-analyses by Storm, in the meta-analysis of standard studies by Bem et al., and in some of the meta-analyses by Radin.
Given that Chris feels so strongly about this methodological point and that he apparently by oversight forgot to point out these other cases in his book, I expect that he will want to contact these researchers and tell them that they botched their analysis, their findings are nonsense, and their knowledge of methodology is not even sophomoric because they used the same method as Milton & Wiseman.
Chris tries to present parapsychology in black and white terms. But parapsychology actually has much gray—both for the methodology that has been used (discussed in the paper above) and the results obtained (discussed in paper above and in another posting). These gray areas need to be acknowledged and dealt with if parapsychology is to make progress and become more accepted. That was the basic point in my review.
Posted by: Jim E. Kennedy | February 06, 2014 at 09:04 AM
J.E.K, The link to your paper on meta-analysis does not work. I get error 404 - file not found.
I have tried to soften and qualify my opinion of the statistical argument presented by Chris Carter. As I have said several times, I do agree that the pooled methodology he uses is the best, assuming like experiments. And, that being said, I have also been critical - gently - of his pronouncements concerning the Stouffer method used by M&W. There is a way to weight the Stouffer method such that one assigns a weight to the Z scores (even though the Z scores are already "weighted" to some degree by being calculated based on the n of the individual experiment). Working in an actuarial dept of a large insurance company, we sometimes use a Stouffer's to assess results of studies of emerging technologies and pharmaceuticals. Nobody ever told us we were doing it wrong.
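A sketch of the weighted variant described above, with weights proportional to the square root of each study's trial count (the z scores and trial counts are made up for illustration):

    import math

    def weighted_stouffer_z(z_scores, ns):
        # Z = sum(w_i * z_i) / sqrt(sum(w_i**2)), with w_i = sqrt(n_i),
        # so that larger studies count for more.
        w = [math.sqrt(n) for n in ns]
        num = sum(wi * zi for wi, zi in zip(w, z_scores))
        den = math.sqrt(sum(wi * wi for wi in w))
        return num / den

    # The 100-trial study dominates the two 8-trial studies here.
    print(weighted_stouffer_z([0.5, 0.5, 2.0], [8, 8, 100]))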
Carter says that M&W did not use the same approach as the earlier analysis by Honorton, but it is the same approach - unless I am missing something.
Carter's insistence on a wrong presentation of the statistical facts makes me feel less comfortable with other points he makes, like those concerning inclusion criteria. Now ersby has produced some seemingly interesting and viable counters to those points, which I do not have time to read. Ersby, being a skeptic, is to be questioned, but, again, Carter has opened the door for him.
Confusing and frustrating.
Posted by: no one | February 06, 2014 at 03:39 PM
Wow -- it's a pleasure to watch two heavyweights debate this matter, even though (like Michael and many others, I'm sure) I don't understand a great deal of what's being said.
Thank you, Jim and Chris, for your contributions here.
I have to say, I find this particularly provocative:
"Given that Chris feels so strongly about this methodological point and that he apparently by oversight forgot to point out these other cases in his book, I expect that he will want to contact these researchers and tell them that they botched their analysis, their findings are nonsense, and their knowledge of methodology is not even sophomoric because they used the same method as Milton & Wiseman."
I'm looking forward to Chris's response!
The conversation reminds me, once again, of how grateful I am that the universe has supplied me with enough paranormal and spiritual experiences of my own to allow me to reach my own conclusions.
Posted by: Bruce Siegel | February 06, 2014 at 04:13 PM
"The link to your paper on meta-analysis does not work."
There was a minor HTML formatting error. I fixed it. It works now.
Posted by: Michael Prescott | February 06, 2014 at 05:22 PM
"There was a minor HTML formatting error. I fixed it. It works now."
Thanks. Read it, liked it.
+1 to Kennedy. Really adds force to his earlier comments here regarding the state of paranormal research. As far as I am concerned, I have to say that Kennedy is making the most convincing argument.
As an aside, a link with discussion on the weighted version of Stouffer's that I like:
http://statgen.ncsu.edu/zaykin/some/Optimally_weighted_Z-test.pdf
Posted by: no one | February 06, 2014 at 09:01 PM
I wonder if Ulms (or statistically knowledgeable parapsychologists) could help out in Carter's defense here. They should be invited to weigh in, maybe. I hope some of the objections to CC's analysis can be put to rest. It would be nice to at least clarify the issues.
Posted by: Roger Knights | February 07, 2014 at 05:33 AM
Yes, this is very edifying. I hope the parties continue to debate here.
Posted by: Matt Rouge | February 07, 2014 at 06:42 AM
I've always been interested in these brawls over the meaning of statistical methods for what they show about whether or not the complaints are really held to be valid - especially whether the complainers apply the same standards to other fields, and especially whether psychologists reject their colleagues' use of the standards they reject in this field. It's like the old "extraordinary claims" canard: if statistical methods are not valid in the study of parapsychology, they can't be valid in the study of psychology. Psychologists don't seem to realize that the enormous amount of past psychological orthodoxy, considered to be science in its time but which turned out to be invalid, makes the everyday claims of their field worthy of the highest skepticism. And much of what psychologists hold to be science deals with far more complex matters than parapsychologists test in lab experiments, and is far less amenable to accurate observation and mathematical analysis. Not to mention the clearly materialistic motives being explicitly and unreservedly part of the package they are selling as objective science.
When I put quotes around "skeptic" I mean for it to debunk the pretenses of the pseudo-skeptics. I don't intend to be polite about it, but honest.
Posted by: Anthony McCarthy | February 07, 2014 at 09:27 AM
Great thread! I've often referred to Dr. Kennedy's papers on the skeptiko forum. On my iPad so will be brief: re pooling in meta-analysis, the Cochrane group (who set the gold standard for such studies, as I understand it) specifically state that pooling is not a valid tool.
I'm on my iPad now but will post links later.
(This is the real Arouet Michael, not my "fan". I'm happy to help verify if you need me to!)
Posted by: Arouet | February 07, 2014 at 01:57 PM
Anthony McCarthy,
Ding ding ding ding! GREAT comment.
Posted by: Matt Rouge | February 07, 2014 at 06:34 PM
I will be interested to see how Chris clears up some of the apparent mistakes he's made.
It's not good to make mistakes, but my guess is that he'd end up with a pretty similar table with pretty similar stats to work with. Couldn't JEK correct the table and show us what the stats look like? I bet they'd still be impressive and statistically significant (not all of them, perhaps, but many of them).
I'll reiterate the point I made in the comments of another post: How are proponents supposed to prove that psi exists?
1. Single studies are always dismissed by skeptics.
2. Replications are never good enough. (By the way, the "need" for replication is really nothing more than a part of the sociology of science. On an epistemological level, having a perfect replication doesn't necessarily prove something true, and lacking one doesn't necessarily make a claim dismissible.) It certainly sounds to *me* as though Ganzfeld studies have been replicated, but skeptics will always shriek, "It hasn't been replicated. Hahaha, it's not truueeuue!"
3. And we now "see" that meta-analysis is not to be accepted either.
So, permanent win for skeptics. The Atheist Empire is secure for all time. Luke will not be taking out the Death Star with his pesky studies!
But no. It strikes me as an epistemological no-no to be able to inoculate one's position against disproof. It's a sure sign that something is fishy in the state of Denmark.
Enough mixed metaphors! I think my point should be clear.
Posted by: Matt Rouge | February 07, 2014 at 09:08 PM
I am no statistical master, but I did a lot of quantitative analysis and statistics work at business school at Purdue, so I can feel my way through a bit.
I find it difficult to understand why pooling the Ganzfeld data and treating them as iterations in one big study would not be allowed.
Certainly, the protocols can differ to some extent. Nevertheless, even trials within the *same study* are an arbitrary concatenation that just exists in our mind. Ultimately, each trial is a discrete event (albeit with varying degrees of connection to other trials in the study: e.g., the same person sitting there doing multiple trials in a row creates a more connected series of events, presumably, than 10 different people doing 1 trial apiece).
Thus if discrete events in a single study can be added up and analyzed, then I don't see why discrete events in multiple studies can't be added up and analyzed if they are comparable.
Posted by: Matt Rouge | February 07, 2014 at 09:16 PM
"I find it difficult to understand why pooling the Ganzfeld data and treating them as iterations in one big study would not be allowed."
I do too. I see no reason why, if the experimental design was the same, the results couldn't be pooled as Carter wants to. Carter does misrepresent what was done, but that is another issue.
I can definitely see why you wouldn't pool results that way if you're looking at drug trials. However, the psi situation - yes or no, hit or miss - seems to me very simple, and it need not be subject to all of the considerations that have been suggested here.
Posted by: no one | February 08, 2014 at 01:01 AM
Here is an interesting article on problems with using statistics in the scientific method:
http://nautil.us/issue/4/the-unlikely/sciences-significant-stats-problem
I really see the issue with respect to psi as sociological: when psi was first recognized as a "phenomenon" to be studied in the 19th century, Christianity was strong and atheism was on the rise. It was caught in the pincers. The monumental atheistic influence of Freud I think killed any hope of general acceptance of the phenomenon as real in the 20th century. Psychologists were, among scientists, in the best position to notice the phenomenon in action, but they are among the most atheist-leaning of scientists (no doubt because of Freud's influence).
The trouble with "proving" psi is that opponents will simply not recognize proof in the form of "big hits" in the field. Didn't you know the plural of "anecdote" is not data?!
If we try to capture big hits in an experimental setting, as in remote viewing experiments, opponents will simply refuse to see that the subjects achieved any hits and deny the positive interpretations of those scoring the results.
Then, when we turn to the Ganzfeld studies and other experiments in which the number of hits cannot be denied (except through criticizing the protocols, which opponents will do insofar as they are able) but whose success is based on statistical analysis, we will get into recondite arguments about... statistical analysis.
Meanwhile, those of us who have experienced psi on the big hit or everyday level ("I knew it was her when the phone rang") know that this stuff is real. Despite Christian ideology, which would rather not deal with psi and its implications, most people in the US have experienced something and believe to some degree (I believe that poll results I have read from time to time bear this out).
The result is a kind of national and even worldwide cognitive dissonance about the issue: the masses and even a portion of the scientific elite (particle physicists seem to have no problem with psi or the paranormal overall) get it, but for some reason we are not allowing ourselves just to recognize the fact of psi as a species and integrate it into our thinking.
To me, if anything, the Ganzfeld results I have read about *exceed* my expectations. I would not guess that ordinary people would get hit rates of, say, 33% in a study. We should also keep in mind that, in the course of the studies, from time to time there have been people getting close to 100% hit rates so that, if anything, the totaled data hide some individual performance levels that *ought* to be impossible (IOW, you might have a series of trials for an individual that is 1,000,000 to 1 against chance, but then regression to the mean across the study reduces the effect to mere "statistical significance" for the entire group).
Perception and biases count for everything when such regression to the mean is at work. It gives skeptics the opportunity to deny anything they want. To take another example, skeptic-dominated Wikipedia demonstrates a huge bias against "alternative medicine." When you read about any kind of herbal therapy or acupuncture or anything like that, as much as possible studies are cited along the lines of "no effect was demonstrated." I can never take any such claim in Wikipedia at face value, since the bias is so egregious. These guys just don't want that New Age-y stuff to be true! (To be fair, certain herbs are simply known at this point to have proven medicinal effects, and there is no attempt in Wikipedia to deny that. Further, the lore of herbs is typically not expunged from articles. But there is a very clear tendency to try to "debunk" alternative medicine as much as possible.)
It's analogous to the psi situation in that there is a regression to the mean by which not every therapy is going to work for every person, so the effect can be very dramatic for individuals but not necessarily for the average person. At the same time, the average person will likely experience *some* type of alternative medicine that works for him or her if s/he tries enough things. (This is something that any Native American medicine wo/man would just know instinctively, don't you think, no one?)
For example, I had really nasty tonsils that were swollen all the time. I happened to run across information that the herb cleavers could help swollen lymph nodes. I made a tea and BOOM! Gone. Not just helped a little--it was an instant cure. When the swelling eventually came back, more tea, cured instantly again. But how many doctors in the US are going to recommend this incredibly safe and (potentially) effective treatment when their patients complain of swollen tonsils or other lymph nodes? That number is going to be close to zero.
But if we ran a study on cleavers' effect on swollen tonsils, it may very well turn out that it is only beneficial to a small number of people affected. Regression to the mean could make it *appear* that the herb is useless. Yet, if that is true, the medicine wo/man would know to try it and go on to something else if it didn't work. It is only the Western "scientist" who would conclude from his or her study that the herb "demonstrates no effect." Meanwhile, all those drugs from Big Pharma? Of course those work! The studies *prove* it!
And so it is with psi. The masses, despite their Christian bias, understand that it's real. Studies like the Ganzfeld show an effect. But the Christian-atheist "coalition government" doesn't want to recognize it, so they won't.
Posted by: Matt Rouge | February 08, 2014 at 08:20 AM
Matt Rouge: "from time to time there have been people getting close to 100% hit rates so that, if anything, the totaled data hide some individual performance levels that *ought* to be impossible (IOW, you might have a series of trials for an individual that is 1,000,000 to 1 against chance, but then regression to the mean across the study reduces the effect to mere "statistical significance" for the entire group)."
Really good point re: regression to the mean and excellent use of the white crow argument.
" At the same time, the average person will likely experience *some* type of alternative medicine that works for him or her if s/he tries enough things. (This is something that any Native American medicine wo/man would just know instinctively, don't you think, no one?)"
I would absolutely agree.
Conversely, I have talked to several people who have gotten the flu shot this year, yet have contracted the flu. What should I conclude from that?
Materialists rarely apply the same standards to their materialist theories that they apply to so-called "extraordinary claims", alternative medicine, psi, etc.
Posted by: no one | February 08, 2014 at 03:18 PM
Re: the question of pooling results. Anyone who is interested in this field should take a look at the Cochrane Collaboration website (http://www.cochrane.org/) and in particular their handbook: (http://www.cochrane.org/training/cochrane-handbook)
This group of medical professionals has, over 20 years, been developing the gold standard for meta-studies.
Here's a page explaining why they don't recommend pooling studies: http://www.cochrane-net.org/openlearning/html/mod12-2.htm.
It's pretty readable even for a layperson (such as myself). Chapter 8 of the handbook on methodological bias is also a really good read. Their strong focus is on methods that reduce the risk of methodological bias.
I think anyone interested in parapsychology should spend a little bit of time on their site!
Posted by: Arouet | February 08, 2014 at 06:59 PM
Here's a question for anyone who can answer it.
It seems that the highest rates of success in the Ganzfeld studies are achieved by artists, musicians, and the like. So why aren't more tests carried out using exclusively these subjects?
Since there's still so much debate (at least in some circles) about the reality of psi, why not raise experimental success rates by testing only those most likely to succeed?
Posted by: Bruce Siegel | February 08, 2014 at 07:40 PM
I have submitted a paper for publication that has some discussion that is very relevant to some of the comments that have been made here. The primary topic of the paper is Bayesian statistics, but don’t let that discourage you because the relevant sections discuss science much more generally. The full paper is at http://jeksite.org/psi/bayesian.pdf
The relevant sections are below.
“Both classical and Bayesian analyses assume that experimental research is self-correcting and will eventually produce valid, compelling conclusions. Biased results for an experiment will be rectified by other experiments that are unbiased.
This idealistic philosophical hope does not consider the evidence that sustained levels of methodological noise and bias can occur in academic research—and particularly in psychology and parapsychology (Kennedy, 2013a, 2013b; Pashler & Wagenmakers, 2012). The typical research practices in recent years in psychology and parapsychology have been exploratory and provide many opportunities for biases and misconduct. The amount of methodological bias and misconduct that actually occurs cannot be reasonably estimated. Undetected cases are likely. These exploratory research practices cannot be expected to resolve a controversy such as the occurrence of psi—and particularly when the findings consistently differ among experimenters (Kennedy, 2013b).
Confirmation
Well-designed confirmatory experiments are required to make experimental research self-correcting and to provide convincing, valid conclusions (Kennedy, 2013a, 2013b; Pashler & Wagenmakers, 2012; Wagenmakers et al., 2012). Confirmatory methodology is well established for regulated medical research, but has not been part of the research culture for psychology and parapsychology. Key components include pre-specification of statistical methods and acceptance criteria, power analysis, public prospective registration of experiments, experimental procedures that make intentional or unintentional data alterations by one person difficult, documented validation of software, and sharing data for analyses by others (Kennedy, 2013a, 2013b, 2013c).
Skeptics have suggested that psi experiments are actually a control group that provides empirical evidence for the magnitude of methodological bias that occurs with current exploratory practices. That hypothesis is applicable to most experimental research in psychology as well, and remains plausible until confirmatory methodology is implemented, or until there is convincing evidence that does not rely on experiments. The most convincing evidence would come from the development of practical applications.
Application
Practical applications have been a major factor in the widespread acceptance of quantum physics. The phenomena claimed for quantum physics are inconsistent with daily life, as are the claims for psi experiments. In addition, most physicists now openly admit that they have no convincing conceptual theory or understanding of how quantum effects occur. However, unlike psi, the reality of quantum phenomena is widely accepted. One major factor is that quantum physics is the basis for transistors and lasers. Those technologies establish that quantum phenomena are real, and they have become part of our daily lives. I would not expect the claims for quantum physics to be widely accepted if most experiments did not obtain evidence for the effects and if useful practical applications could not be developed—as has been the case with psi research.
Applications of psi can be expected to be developed very quickly after the discovery of methods that demonstrate even minimally reliable effects. Enough effort and money have been invested in psi research that if the results were convincingly valid and reliable, practical applications would be developed (Kennedy, 2003). These applications would be conspicuous in daily life. From this perspective, the absence of useful applications is evidence for the absence of convincing psi effects.
…
My impression is that many scientists implicitly look to application rather than experiments when evaluating the evidence for psi. However, the rationale for that strategy is rarely discussed openly because it raises doubts about the integrity of scientific research far beyond parapsychology.”
Posted by: Jim E. Kennedy | February 08, 2014 at 08:12 PM
Jim,
||Practical applications have been a major factor in the widespread acceptance of quantum physics.
...
One major factor is that quantum physics is the basis for transistors and lasers. Those technologies establish that quantum phenomena are real, and they have become part of our daily lives.||
I think this reasoning is weak. Both transistors and lasers were invented before QM was widely accepted or understood. You make it sound as though QM principles were the clues that allowed those things to be invented.
I was reading the "Dancing Wu Li Masters" in high school in the 80s, and it all seemed pretty accepted back then--but it was based on things like the double slit experiment. Products and applications that confirmed QM were never mentioned.
QM may help us make post hoc sense of lasers and transistors after the fact, but has it really resulted in lots of new and useful applications? I'm not saying that there are none, and I of course believe in QM. I just think you have the sociology wrong (I think most people are vaguely aware of QM at best, and they don't know of any QM applications at all).
But that's not what's really weak about your reasoning here:
||The phenomena claimed for quantum physics are inconsistent with daily life, as are the claims for psi experiments. In addition, most physicists now openly admit that they have no convincing conceptual theory or understanding of how quantum effects occur. However, unlike psi, the reality of quantum phenomena is widely accepted.||
Duh, it's the ideology! QM, however strange it was, didn't violate people's worldviews to the extent that psi does. And, by the way, one *does* find "skeptics" regularly dismissing the implications of QM (e.g., probabilistic vs. deterministic reality), even though they would never deny QM itself.
||Applications of psi can be expected to be developed very quickly after the discovery of methods that demonstrate even minimally reliable effects. Enough effort and money have been invested in psi research that if the results were convincingly valid and reliable, practical applications would be developed (Kennedy, 2003). These applications would be conspicuous in daily life. From this perspective, the absence of useful applications is evidence for the absence of convincing psi effects.||
Wait a sec, are you now admitting that you really don't believe in psi at all?
Your claim here makes no sense at all. QM experiments and applications involve making physical systems that can behave the same way over time. Psi happens in your head.
By way of analogy, if you looked at everyday people's songwriting ability, you might conclude that songwriting ability doesn't exist. If you looked at the percentage of Irving Berlin's songs that are good (he wrote an inordinate number), you might also conclude that songwriting ability doesn't exist, and "White Christmas" was just a fluke, a lucky guess.
But the "practical application" of songwriting ability, something that is manifested by the mind, is that occasionally a really great song gets written. No, it's not a reliable effect. That doesn't mean it doesn't exist.
As I said in another thread, the "practical application" of psi is the provision of advice. My friends and I give it to each other, and people regularly pay for it.
||My impression is that many scientists implicitly look to application rather than experiments when evaluating the evidence for psi. However, the rationale for that strategy is rarely discussed openly because it raises doubts about the integrity of scientific research far beyond parapsychology.||
Here, you are correct! If the scientific elite were to apply the same standard to psychology, sociology, etc., that they apply to psi, then they would toss all of it out. In some cases rightly, in some cases not.
Posted by: Matt Rouge | February 09, 2014 at 01:21 AM
"Applications of psi can be expected to be developed very quickly after the discovery of methods that demonstrate even minimally reliable effects. Enough effort and money have been invested in psi research that if the results were convincingly valid and reliable, practical applications would be developed (Kennedy, 2003). These applications would be conspicuous in daily life. From this perspective, the absence of useful applications is evidence for the absence of convincing psi effects."
I'm really going to have to call you out on this. There is a long list of major scientific fields with no practical applications, among them particle physics, astrophysics, cosmology, and evolutionary biology. Has anyone ever talked about practical applications of the top quark? No. But that doesn't stop people from believing that it exists.
And are minimally reliable effects enough for practical applications of psi? If you're thinking of making money in casinos or the stock market, then no. In casinos, the house take obliterates a weak psi effect. Ditto the stock market and brokerage fees. And in the real world, the other players in a casino or stock market will also be using whatever intrinsic unconscious psi ability they have, plus all sorts of other highly developed strategies, making a straightforward application of a weak effect a very messy and daunting proposition. This cannot be captured in a post-hoc computer simulation.
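To put rough numbers on the casino point, here is a minimal expected-value sketch in Python. Every figure in it is an assumption chosen purely for illustration: an even-money bet with a house edge of about 1.4 percent, and a hypothetical weak psi effect that nudges the win probability up by half a percentage point:

# Illustrative only: all numbers are assumptions, not data from any study.
def expected_value(p_win, payout=1.0, stake=1.0):
    """Expected profit per unit staked on an even-money bet."""
    return p_win * payout - (1 - p_win) * stake

baseline = 0.493            # win probability implying a ~1.4% house edge
psi_bump = 0.005            # hypothetical weak psi effect: +0.5 points

print(expected_value(baseline))             # -0.014: the bare house edge
print(expected_value(baseline + psi_bump))  # -0.004: still a losing bet

Under these assumptions, the effect would have to lift the win probability past 50 percent, i.e. by more than 0.7 percentage points, just to break even.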
Posted by: Stephen Baumgart | February 10, 2014 at 12:02 AM
I'm with Jim Kennedy here. In my view QM is very highly counter-intuitive, even more so than many popular accounts would have us believe. In particular, quantum non-locality cuts right across any common-sense (or for that matter day-to-day scientific) take on the world. As Niels Bohr put it: "if you are not shocked by QM you haven't understood it."
Yes, as Matt says, there are interpretations of QM out there that purport to deflate at least some of the weirdness - but to my mind none of them do this in any way that is remotely successful.
I do agree it is too simplistic to say that - in view of this weirdness - QM gained the traction it did solely as a result of practical applications. After all, QM was developed when it was because of some basic, pressing problems in physics where the classical approach had simply broken down.
But I'm sure practical applications helped, as there is resistance within science to applying QM to a broader range of problems. My understanding, for example, is that quantum biology and QM approaches to cognitive science are making progress; but there is certainly resistance.
In that regard some may recall the outraged response of many scientists and materialist philosophers (including Patricia Churchland, recently mentioned here) when Roger Penrose suggested in his 1989 book "The Emperor's New Mind" that QM may play a physical role in how the brain produces consciousness. Whatever the merits of Penrose's speculative approach, it was based on physics; yet it was attacked by those who themselves claimed that physical science could explain the mind! Obviously, they couldn't stomach any sort of scientific explanation that used the 'wrong sort of science' in their eyes, i.e. QM.
Finally, there must be lots of practical applications whose development was driven solely by QM. One that immediately comes to mind is quantum encryption which, I think, was developed in the early 90s.
Posted by: Simon Oakes | February 10, 2014 at 11:11 AM
But Joe Gallenberger claims he's been successful using psi, in combination with a group, in winning at Vegas casinos for many years, as described in his inexpensive book.
If he could televise (initially on YouTube?) one of his expeditions it would be interesting. What I'd like to see ultimately would be an expedition televised live -- ideally with Skeptics betting against him.
Posted by: Roger Knights | February 11, 2014 at 09:56 AM
PS: I should have added that Gallenberger and his group were sometimes able to get the casino to reserve a table just for them (or even to set it up in a private hotel room), to reduce negative vibes from others.
Posted by: Roger Knights | February 11, 2014 at 04:42 PM
Based on the comments here I have realized that the discussion of quantum physics in my paper quoted above is a distraction and I'm removing that part. The main point of the section on applications is the final paragraph, and even Matt agrees with that. This point can be made more concisely by focusing on psi without bringing quantum physics into it. Also, my undergraduate degree was in engineering physics (applied physics), and the discussion of the role of application probably overgeneralized the perspective of applied physics.
Given some of the comments here on quantum physics, it may be useful to add a couple of points. In my first quantum mechanics class, the professor started by saying "What we are going to study in this class is basically equivalent to me throwing a tennis ball against this concrete wall and every now and then it goes through. Not only that, we are going to study things that are equivalent to putting someone on the other side of the wall and playing catch through the wall."
We studied semiconductors in that class, and his initial comment about playing catch was an analogy for what happens in transistors. Quantum physics actually had a series of controversial effects that were resolved over a period of decades. The analogy of tennis balls passing through a wall was an early issue and, as Simon pointed out, non-locality and entanglement took much longer to become accepted. But there is still no widely accepted understanding of how these effects occur.
Matt’s comment that quantum explanations of transistors and lasers were post hoc is not consistent with my understanding from the classes I had in applied physics. I suggest that people who are interested in these topics do quick internet searches for “quantum physics semiconductor” and “quantum physics laser”. You will find that overwhelmingly the sources say that transistors and lasers are based on quantum mechanics (e.g., http://www.pbs.org/transistor/science/info/qmsemi.html ). Developments in quantum mechanics and semiconductors in the 1930s were the groundwork for transistors later, and the laser was developed based on a quantum effect originally described in 1917.
Posted by: Jim E. Kennedy | February 12, 2014 at 10:11 AM
Jim,
Thanks. I'm still hoping you will argue back against the real arguments against your position made in the thread.
||Matt’s comment that quantum explanations of transistors and lasers were post hoc is not consistent with my understanding from the classes I had in applied physics. I suggest that people who are interested in these topics do quick internet searches for “quantum physics semiconductor” and “quantum physics laser”. You will find that overwhelmingly the sources say that transistors and lasers are based on quantum mechanics (e.g., http://www.pbs.org/transistor/science/info/qmsemi.html ). Developments in quantum mechanics and semiconductors in the 1930s were the groundwork for transistors later, and the laser was developed based on a quantum effect originally described in 1917.||
The Wikipedia article on transistors doesn't even mention quantum mechanics. That in itself doesn't prove anything, but it seems the transistor was invented rather by accident.
You're probably right about lasers, however, as they seem to have been the product of actual research into physics, and the people involved would have understood quantum mechanics.
Another example you could use in making the same incorrect argument is relativity. It blew people's minds but didn't really violate anyone's worldview, atheist or Christian. And it did in fact lead to the development of practical applications (atomic bomb, GPS technology, etc.).
So yes: if you can create a machine that does the same thing again and again reliably, that's a thing that people will use and appreciate (it doesn't really matter whether the theoretical science comes before or after).
Psi and many other things of which we make mental use are not externalized in the form of physical technology.
Though you bring erudition and many good points to the table, I find your position frustrating. The main reason is this: society *ought* to recognize that the paranormal has been proven by now. The white crow has been found. Atheist-materialism *has* in fact been defeated.
Regardless of your specific points, your mode of argument would seem to contribute to the hand-waving that lets atheists get away with a divide-and-conquer strategy that would be ludicrous in any other sociological context. You reference personal paranormal experiences that skeptics will *easily* dismiss as "mere anecdote," especially since you provide no details, but then you attack with seeming gusto the evidence for the paranormal that does exist.
Frankly, it comes across to me as advanced concern-trolling. "Oh, yeah, I'm one of you; I've had paranormal experiences. But none of the evidence for any of that is any good!" Moreover, while you seem convinced of your own experience, you seem rather arrogantly dismissive of others' as "wishful thinking" and so on. The impression made on me has not been a positive one.
If you think the paranormal is real, then why not make that your foundation, instead of something you mention in passing while trying to tear down the edifice? (The tearing down is not very convincing to me, either. Carter's mistakes are fair game, of course. But you have yet to convince me that the overall hit rate of the chart he provided would look much different, even adjusted for his alleged errors.)
Posted by: Matt Rouge | February 12, 2014 at 04:33 PM
"Moreover, while you seem convinced of your own experience, you seem rather arrogantly dismissive of others' as "wishful thinking" and so on."
Let's give Jim a chance to clarify his position in this regard.
So Jim, whose experiences or research, besides your own (if anyone's), do you personally find compelling as evidence for psi? Are you impressed by what some NDErs report? Do you take seriously a book like Mental Radio, or Ian Stevenson's research, or Carol Bowman's? How about Leonora Piper and the extensive investigations of her readings?
These are some of the areas and people I myself find persuasive. But perhaps you have your own examples. Matt seems to think you only take your own psychic experiences seriously--is it true?
Posted by: Bruce Siegel | February 12, 2014 at 11:14 PM
Jim, here's another slant to my question. Whose work in the fields of psi or spirituality inspires you? Some personal favorites are Ken Ring, Raymond Moody, Stan Grof, Rupert Sheldrake, Robert Monroe, and the aforementioned Carol Bowman and Upton Sinclair.
What about you?
You seem like an interesting guy and I'd like to know more about you, particularly in regard to where your enthusiasm lies. I'm sure others here feel the same.
I think Matt has been a little quick to question your motives, so please help us to understand you better.
Posted by: Bruce Siegel | February 12, 2014 at 11:53 PM
The binomial method is an absolutely valid method of statistical analysis for Milton and Wiseman's (1999) studies. The contention that it is invalid, based on the Cochrane Collaboration index, stems from a misunderstanding of what they are actually saying. It conflates experiments utilizing controls with basic binomial setups such as the Ganzfeld.
In the former, expected success/risk frequencies vary empirically (as controls), whereas in the GZ they are standardized at 25% for all four-choice studies. Binomial analyses simply exclude studies not of four-choice design (none in Milton and Wiseman's MA).
However, if expected frequencies are not standardized, pooling of trials can lead to Simpson's paradox, as illustrated by the Cochrane example.
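To make that concrete, here is a small Python sketch with invented counts. When every study is four-choice (chance fixed at 25%), pooling hits and trials and running an exact binomial test is legitimate; when chance levels differ across studies, naive pooling against a single baseline misleads:

# Illustrative sketch: all counts are hypothetical, not from any real study.
from scipy.stats import binomtest

# Case 1: all studies four-choice, chance = 25%. Pooling is valid.
hits, trials = 120, 400   # hypothetical pooled counts: 30% hit rate
print(binomtest(hits, trials, p=0.25, alternative="greater").pvalue)
# about 0.01 -- significant at the 5% level

# Case 2: mixed designs, where naive pooling misleads. Each study here
# is at or below its own chance level...
four_choice_hits, four_choice_n = 24, 100   # 24% vs. 25% chance
two_choice_hits, two_choice_n = 48, 100     # 48% vs. 50% chance
pooled = (four_choice_hits + two_choice_hits) / (four_choice_n + two_choice_n)
print(pooled)  # 0.36 -- looks far "above chance" against a naive 25% baseline

This second case is the pitfall in the Cochrane example; it cannot arise when every pooled study shares the same 25% chance baseline.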
----------
While I think the exact binomial test would have been a better choice, where I disagree with Carter, generally, is in thinking that this debate over the statistical test Milton and Wiseman (1999) applied can tell us much about their studies. As Kennedy pointed out, many parapsychologists used (and still use) the unweighted z-score method, despite its drawbacks. My friend Maaneli asked Storm why he included it in his 2010 MA (although he also provided a binomial test), and he replied that it was because it was a standard method in psychology.
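For readers unfamiliar with the distinction, here is a minimal sketch, with invented counts, of how the unweighted z-score method can disagree with simply pooling hits and trials; the unweighted method gives a tiny study the same voice as a large one:

# Invented counts for illustration; four-choice design, so chance = 0.25.
from math import sqrt
from scipy.stats import binomtest, norm

P0 = 0.25
studies = [(2, 10), (2, 10), (66, 200)]  # (hits, trials): two small null
                                         # studies, one larger positive one

# Unweighted (Stouffer) method: combine per-study z scores, ignoring N.
zs = [(h - n * P0) / sqrt(n * P0 * (1 - P0)) for h, n in studies]
stouffer_z = sum(zs) / sqrt(len(zs))
print(norm.sf(stouffer_z))   # about 0.14 -- not significant

# Pooled method: add up hits and trials, then run an exact binomial test.
hits = sum(h for h, _ in studies)     # 70
trials = sum(n for _, n in studies)   # 220
print(binomtest(hits, trials, p=P0, alternative="greater").pvalue)
# about 0.01 -- significant, because the large study gets its proper weight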
Much more interesting than whether the odds of MW's studies were 1/6 or 1/36, I think, is the steep drop in effect sizes reported over the period they covered. The MW ganzfeld studies were simply much less successful than the PRL studies, by any measure.
Bem, Palmer, & Broughton (2001) found strong empirical support for the hypothesis that this was so because MW's studies were non-standard. But Maaneli and I, in a forthcoming paper to be published in the JP, also found that MW's studies used a sample not nearly as selected as the PRL sample (which had almost 100% of its subjects chosen on the basis of at least one psi-conducive trait). We would expect different hit rates among different populations, but Milton and Wiseman simply did not pay attention to which population was being tested. Nevertheless, in their database, 513 trials come from studies which used selected participants, and for these studies the pooled HR is 30.6% (non-significantly different from the PRL total of 32.2%).
Comparing similar populations with similar methodologies leads to similar results; comparing different populations with different methodologies leads to different results. Milton and Wiseman's (1999) database supports this, by both our analysis and Bem, Palmer, and Broughton's.
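As a rough check on the "non-significantly different" claim, here is a standard two-proportion z-test in Python. The 513 trials at 30.6% come from the comment above; the PRL trial count used here (354, chosen to give roughly 32.2%) is an assumption for illustration only:

# Hit counts are back-calculated from the stated percentages; the PRL
# sample size is an assumption, not a figure from the comment above.
from math import sqrt
from scipy.stats import norm

h1, n1 = 157, 513   # selected-subject trials in the MW database: ~30.6%
h2, n2 = 114, 354   # assumed PRL totals giving ~32.2%

p1, p2 = h1 / n1, h2 / n2
p_pool = (h1 + h2) / (n1 + n2)
se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p2 - p1) / se
print(2 * norm.sf(abs(z)))   # about 0.6 -- nowhere near significant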
Posted by: Johann | February 13, 2014 at 11:24 PM
Matt, I don't see Kennedy as an undercover skeptic. One of his papers he linked to even calls for more psi research, but with a different research paradigm. I think he effectively explains why the current paradigm isn't working.
With regards to dismissing other people's psi experiences (80% wishful thinking), here again I find myself agreeing with him, at least directionally. I don't know if it's 80% or 42.71%, but I find myself hearing people out on how "God intervened" and did this or that, or how there's a ghost in their house because of this or that noise they hear at night, or how some ESP happened to them the other day, and very often it is apparent that what they are saying is based on some sloppy thinking and that there exists some fairly obvious, plausible alternative normal explanation.
I mean there are people who simply do not seem to understand that coincidences do happen. To these people *everything* happens for a reason and simple coincidences are used by them as proof of higher powers guiding their lives.
I trust my own psi experiences to be what I believe them to be because I know that I usually do a pretty good job of considering alternative plausible normal explanations. As I have become familiar with the thinking abilities of many people on this forum, and feel that most everyone here is interested in gaining an understanding of the truth, I also generally trust that if someone says they had a psi experience, it probably is exactly that. Unfortunately, a lot of people are intellectually lazy and aren't all that interested in pursuing a deeper understanding of much of anything.
Posted by: no one | February 14, 2014 at 04:37 AM
Well said Johann. Agreed!
Posted by: no one | February 14, 2014 at 12:17 PM
"Matt, I don't see Kennedy as an undercover skeptic. One of his papers he linked to even calls for more psi research, but with a different research paradigm. I think he effectively explains why the current paradigm isn't working."
No one, I tend to agree with you. But it will be harder for me to continue to feel as I do if Jim makes no effort to answer my question about where his enthusiasm lies. Do you think that's reasonable? I find it hard to relate to people who are *primarily* critical in their approach to whatever field they choose to study.
In other words, the people I enjoy spending time with are not ones you might describe as critics, first and foremost. Unlike the Randis of the world, whose main delight seems to be to *oppose*, my own favorite thinkers are as likely to trust and praise others in their chosen field as to find fault with them.
Rupert Sheldrake is a good example. His book The Science Delusion is by definition a critical one. But when I look at his whole body of work, I get a sense of someone who delights in the work of friends and colleagues, and who trusts that people in general are good, trustworthy, and worth listening to. (As indicated by his encouraging laymen to do their own experimentation and to observe for themselves.)
Kennedy has shown us his doubt, and done it eloquently. I appreciate that, and admire him for it.
Now I'd like to see the other side of the equation.
Posted by: Bruce Siegel | February 14, 2014 at 03:18 PM
The most prominent practical application of psi, and the one that is most common in the general population, is dowsing. I've read that about 25% of the population can get "hits" using dowsing rods. I've had it happen to me over pipes and under wires. It is widely used by people in the construction industry when digging trenches, in order to avoid cutting pipes and cables. It was used by the Marines in Vietnam to find tunnels. And it was successfully used in archaeological digs, as documented in a book by Stephan Schwartz.
On a theoretical note, dowsing could be tested by comparing the results of hunting for underground pipes using dowsing vs. using one of the electronic pipe locators listed on Amazon. (Prices range from $25 to $5900. I presume the more expensive ones could be rented affordably from a tool rental place.) There'd be no need to construct a hidden network of underground garden hoses and valves. It could be done on any campus virtually on the spur of the moment. I urge academic-based researchers and Skeptics to conduct a few informal experiments forthwith, and then to challenge their opponents in the same institution to a contest.
This could evolve, if good results were obtained, into more rigorous tests. Their results would only be suggestive, not conclusive, but enough of them would be strongly suggestive. And the testing process would be fun and quick, unlike the boring and/or drawn-out procedures needed in other types of testing.
If academics don’t pick up this ball and run with it, I suggest that uncredentialed psi proponents who are (or are associated with) good dowsers post and publicize “challenges” to Skeptics to disprove their ability to find pipes at any location of Skeptics’ choosing. “Duels” as a result of these challenges could be incorporated into one or more episodes of a TV program like Mythbusters, and/or into my hoped-for Psi Show on YouTube.
(Anyone may copy and paste this anywhere--and I hope they will.)
Posted by: Roger Knights | February 15, 2014 at 07:25 AM
Mr Carter: This is OT, but are you acquainted with Michael Sudduth?
http://michaelsudduth.com/interview-on-postmortem-survival/
(The above is his interview on survival, from "Subversive Thinking," a Christian blog.)
He is not a skeptic in the Dawkins mode - he has had a number of paranormal experiences himself, both as a child and as an adult - but (insofar as I can follow his reasoning) he seems to favour the super-psi hypothesis over survival. At least he claims reason leads him to doubt survival.
I think that your mind may be better suited to follow his logic than mine. I would be grateful to know your evaluation of his ideas.
Thank you
Posted by: Paul Robson | February 15, 2014 at 08:51 AM
I want to say a few things about Carter's comments and something which has been left out of this discussion. Chris Carter writes:
"Hyman (1996a) wrote: “The case for psychic functioning seems better than it has ever been…. I also have to admit that I do not have a ready explanation for these observed effects. (p. 43)"
Unfortunately this is a quote mine that has been taken out of context.
The source can be found here "Evaluation of Program on Anomalous Mental Phenomena."
http://www.scientificexploration.org/journal/jse_10_1_hyman.pdf
It's taken out of context because in the following line Hyman wrote "Inexplicable statistical departures from chance, however, are a far cry from compelling evidence for anomalous cognition."
I would also point out that Hyman wrote that in 1996. This is very old; he has since written tonnes of material on the ganzfeld, but for some reason Carter chooses not to mention any of it. For example, in 2007 Hyman wrote:
"Until parapsychologists can provide a positive way to indicate the presence of psi, the different effect sizes that occur in experiments are just as likely to result from many different things rather than one thing called psi. Indeed given the obvious instability and elusiveness of the findings, the best guess might very well be that we are dealing with a variety of Murphy's Law rather than a revolutionary anomaly called psi."
*Ray Hyman. Evaluating Parapsychological Claims in Robert J. Sternberg, Henry L. Roediger, Diane F. Halpern. (2007). Critical Thinking in Psychology.
A few other things. Ray Hyman should be respected for his research into the ganzfeld. Hyman was the first to discover flaws regarding sensory leakage in all of the 42 original ganzfeld experiments, such as the possibility of the sender's fingerprints on the target, sensory cues from the tapes, and rooms that were not soundproof. Charles Honorton actually came to agree with Hyman that the original ganzfeld experiments were flawed and not evidence for psi. It's not a simple black-and-white case of insulting anything a skeptic does; we should appreciate this cooperation. Both Hyman and Honorton actually cooperated and agreed on things.
There's no dispute that the original ganzfeld experiments were flawed. Even parapsychologists don't cite them anymore. But what has not been discussed in this thread is that the autoganzfeld experiments are also flawed, because they did not rule out the possibility of sensory leakage. You can easily find Ray Hyman's comments on this, but Carter has ignored them.
Here is Hyman in 1996 from another article:
"At the very least, the peculiar pattern I identified suggests that we need to require that when targets and decoys are presented to the subjects for judging, they all have been run through the machine the exact same number of times. Otherwise there might be nonparanormal reasons why one of the video clips appears different to the subjects."
Online here: http://www.csicop.org/si/show/evidence_for_psychic_functioning_claims_vs._reality
This is not mentioned by Carter.
Here's what Hyman wrote in more detail:
"Each time a videotape is played its quality can degrade. It is plausible then, that when a frequently used clip is the target for a given session, it may be physically distinguishable from the other three decoy clips that are presented to the subject for judging. Surprisingly, the parapsychological community has not taken this finding seriously. They still include the autoganzfeld series in their meta-analyses and treat it as convincing evidence for the reality of psi."
*Ray Hyman. Evaluating Parapsychological Claims in Robert J. Sternberg, Henry L. Roediger, Diane F. Halpern. (2007). Critical Thinking in Psychology.
Because the possibility of sensory leakage was not ruled out, these autoganzfeld experiments are flawed. Richard Wiseman has also discussed possibilities of acoustic leakage in the autoganzfeld - quite a few possibilities, actually. I did not see Carter mention this; here's the paper:
*Richard Wiseman, Matthew Smith, Diana Kornbrot. (1996). Assessing possible sender-to-experimenter acoustic leakage in the PRL autoganzfeld. Journal of Parapsychology. Volume 60: 97-128.
In short, if there is any possibility of sensory leakage in a psi experiment, then the experiment is automatically flawed; this has been the case with the majority of psi experiments (Hansel 1989).
Ray Hyman, in the book I cited above from 2007, has actually documented some parapsychologists who openly admit it is impossible to eliminate natural causes from their experiments. If there is a natural explanation it will be preferred, as Occam's razor will always be the rule.
Unfortunately there's a lot of literature that Carter leaves out. For example, Hansel (1989), Marks (2000), and Hines (2003) have also discussed issues of sensory leakage in the autoganzfeld/ganzfeld experiments, but this literature is usually ignored.
*Terence Hines. (2003). Pseudoscience and the Paranormal.
*David Marks, Richard Kammann. (2000). The Psychology of the Psychic.
*C. E. M. Hansel. (1989). The Search for Psychic Power.
Chris Carter writes "Hyman and the other “skeptics” have lost the Ganzfeld debate."
But this is not true, because Carter has not cited any of Hyman's research on the ganzfeld! All Carter has done is quote-mine Hyman for half a line. Carter quotes Hyman from a 1996 paper but cites none of his publications since that date.
In another 1996 paper Hyman wrote:
"Subsequent to my response, I have learned about other possible problems with the autoganzfeld experiments. The point of this is to show that it takes time and critical scrutiny to realize that what at first seems like an airtight series of experiments has a variety of possible weaknesses. I concluded, and do so even more strongly now, that the autoganzfeld experiments constitute neither a successful replication of the original ganzfeld experiments nor a sufficient body of data to conclude that ESP has finally been demonstrated. This new set of experiments needs independent replication with tighter controls."
*The Evidence for Psychic Functioning: Claims vs. Reality (1996).
Yet Carter doesn't mention any of this. Sorry, but skeptics have not lost this debate.
Both the ganzfeld and autoganzfeld experiments are flawed because they did not rule out the possibilities of sensory leakage. So Carter's chart is mostly wrong. This is not evidence for psi. As late as 2013, Hyman was still criticising the ganzfeld experiments.
His most recent paper, from 2010, can be found here:
*Meta-Analysis That Conceals More Than It Reveals: Comment on Storm et al. (2010)
Online: http://drsmorey.org/bibtex/upload/Hyman:2010.pdf
He writes:
"This reliance on meta-analysis as the sole basis for justifying the claim that an anomaly exists and that the evidence for it is consistent and replicable is fallacious."
Basically, proponents of the ganzfeld are hiding behind meta-analysis. They have realised they can't demonstrate psi on demand or in the lab, so now anything slightly above "chance" is considered to be evidence for psi. But this is a well-known fallacy: it's called the psi assumption. There are a number of natural explanations that could explain why the data is slightly above chance.
Here's what Robert Todd Carroll has written:
"Here are just a few possible explanations for data indicating significantly greater than chance results in psi experiments: selective reporting, poor experimental design, inadequate number of individuals in the study, inadequate number of trials in the experiment, inadequate number of experiments (e.g., drawing strong conclusions from single studies), file-drawer effect (for meta-studies), deliberate fraud, errors in calibration, inadequate randomization procedures, software errors, and various kinds of statistical errors. If any of the above occur, it is possible that the data would indicate performance at significantly greater than expected by chance and would make it appear as if there had been a transfer of information when there had not been any such transfer."
There is absolutely no evidence for psi. I understand parapsychologists are now hiding behind these meta-analyses, but it is a fallacy. As Hyman wrote, "Until parapsychologists can provide a positive way to indicate the presence of psi, the different effect sizes that occur in experiments are just as likely to result from many different things rather than one thing called psi." Unfortunately parapsychologists can't even define psi; it is considered to be "anything" above chance. In short, both Kennedy and Chris Carter are wrong. The skeptics still have very much to say on this subject.
Posted by: Matt | February 15, 2014 at 12:36 PM
Matt (not Matt Rouge, obviously) wrote: "In short if there is any possibility of sensory leakage in a psi experiment then the experiment is automatically flawed, this has been the case with the majority of psi experiments (Hansel 1989)."
I disagree. In many cases, the "possibility" in question is a mere logical possibility for which there is no evidence at all. Hansel's (mostly worthless) book is a prime example of this kind of specious reasoning.
The objections to the ganzfeld and autoganzfeld experiments are equally specious. Videotape does not degrade that quickly; it would take hundreds or even thousands of replays to produce any difference detectable to the human eye. There never was any evidence of fingerprints or other clues on the materials in the ganzfeld tests. And Honorton did not agree that the original ganzfeld tests were no good; he just decided to bend over backward to accommodate Hyman by adopting even more stringent criteria.
But for skeptics like Hyman, no criteria can ever be stringent enough; there is always the "logical possibility" of some kind of error somewhere. This is pathological skepticism, not serious analysis.
Hyman had promised that if the autoganzfeld tests (conducted using his own protocols) succeeded, he would concede that psi had been proved. The tests did succeed, but Hyman refused to concede anything, citing unidentifiable errors that "might" have cropped up. This is intellectual dishonesty at its worst, and it fools nobody who doesn't want to be fooled.
As Carter correctly says, Hyman and the skeptics have lost the ganzfeld debate.
Posted by: Michael Prescott | February 15, 2014 at 03:37 PM
I will sign my name Matt P so not to be confused with the other Matt.
"Hyman had promised that if the autoganzeld tests (conducted using his own protocols) succeeded, he would concede that psi had been proved."
That's not the full story though. He wrote that the autoganzfeld would meet the standards "with the possible exception of proper randomization of targets during the sending and the judging procedures as well as the possibility of inadequate safeguards against sensory leakage."
Unfortunately parapsychologists have left that important quote out of all of their publications. Of course those possibilities were there; that's why the tests can't be considered evidence for psi. If there's a possibility of a natural explanation (sensory leakage) in any psi experiment, then automatically the experiment is not evidence for psi.
"I disagree. In many cases, the "possibility" in question is a mere logical possibility for which there is no evidence at all."
Evidence is important, but it comes second to this. If there's a logically possible natural explanation for a "paranormal" or psi experiment, by default (Occam's razor) the natural explanation is preferred. That is the way science works, and it's why psi will never be accepted by the scientific community.
Hansel has reviewed every single important psi experiment, and every single one had flaws, i.e., the possibility of sensory leakage was not ruled out. Now, it doesn't matter that there was no conclusive evidence of these things; the possibility was still open, thus invalidating the experiment.
For example, Hubert Pearce, in the famous telepathy experiment with Joseph Pratt, was left in the library alone! With no controls and no observer. It was possible that he sneaked out of the library and into Pratt's room to observe the cards. It's also possible he used a secret accomplice. It's irrelevant what the results were, because precautions were not in place to rule out sensory leakage. We will never know if he really cheated or not (he probably did), but the possibility remains that he could have, so automatically the experiment is not evidence for psi. Every psi experiment is like this. There's a natural possibility for every single one.
If psi is to be demonstrated, then all natural explanations and possibilities of sensory leakage must be ruled out, 100%. Unfortunately, you'll find such a thing is impossible. Anyway, I respect your opinion, and it's best to just agree to disagree. Good luck with the debate.
As for Chris Carter, he has an early chapter defending the experiments of Henry Slade. Since reading that chapter and Carter's defence of Slade many years ago, I have not taken Carter too seriously... Skeptics and believers, even though we disagree, should try to find things to agree on. I am still trying to find something on which I agree with Carter... I might find something one day; still searching. Instead of Carter just calling us skeptics "pseudoskeptics" and militant atheists with a materialist agenda, I would like to know which skeptics he has respect for... or anyone else on this blog, for that matter. Skeptics are nice if you get to know them :)
Posted by: Matt P | February 15, 2014 at 04:17 PM
Bruce,
||I think Matt has been a little quick to question your motives, so please help us to understand you better.||
I haven't so much questioned stated motives as pointed out that motives haven't been stated.
I do find his stance fishy, as well as his unwillingness to answer questions directly here.
I don't so much think that Jim is an "undercover skeptic" but instead someone who is trying to find an advantageous position in the "ecosystem" of this debate. I.e., a contrarian who nominally believes in psi and the paranormal but who nevertheless attacks the evidence for it and doesn't search for supporting evidence.
If so, I think that stance is illegitimate. If one believes in something, one should promote it to the degree one is able.
Posted by: Matt Rouge | February 16, 2014 at 08:02 AM
Thanks for better identifying yourself, Matt P.
Without getting into a long and fruitless debate, I'll just point out that if the "logical possibility" standard were used in other fields besides parapsychology, then drawing conclusions about anything would be impossible.
There is always a nonzero theoretical possibility that any given piece of information is wrong. This is true in courtroom trials, medical diagnoses, scientific experiments, and everyday life.
If all evidence points to OJ Simpson as the killer, there is still the logical possibility that someone else framed Simpson so elaborately that all the evidence seems to point to him.
If all evidence points to successful moon landings during the Apollo program, there is still the logical possibility that the whole thing was faked on a soundstage, and that astronomers around the world were in on the conspiracy.
If all evidence points to evolution, there is still the logical possibility that carbon dating is flawed and that the dinosaurs were swept away in the flood described in Genesis. (Biblical fundamentalists do argue this way.)
It's simply impossible to rule out every unsubstantiated, purely speculative objection.
After all, there is a nonzero logical possibility that my perceptions are hallucinations and that I am not even typing these words!
(See Carter's "Science and the Afterlife Experience" for a lengthy discussion of "logical possibility.")
Posted by: Michael Prescott | February 16, 2014 at 11:19 AM
"If there's a logically possible natural explanation for a 'paranormal' or psi experiment, by default (Occam's razor) the natural explanation is preferred. That is the way science works, and it's why psi will never be accepted by the scientific community."
Parapsychologists have never claimed that psi is not natural; in fact, psi seems to be an integral part of psychology, for example according to Carpenter's theory of "first sight." A logical possibility is merely a possibility that is not self-contradictory. If some prefer to accept mere logical possibilities rather than accept the existence of psi, it is for ideological reasons, not scientific ones.
"Hansel has reviewed every single important psi experiment, and every single one had flaws, i.e., the possibility of sensory leakage was not ruled out. Now, it doesn't matter that there was no conclusive evidence of these things; the possibility was still open, thus invalidating the experiment."
Your approach is atomistic: examine each experiment in isolation and find out whether it contains a flaw. I think this approach is a mistake, because if we examine the experiments as a whole, along with the field studies of psychical research (i.e., if we adopt a holistic approach), then we notice patterns that could not exist if psi did not exist: for example, people with great artistic talent are the best psi subjects, psi manifests more intensely in emotional moments, etc. That indicates the existence of psi.
Posted by: Juan | February 16, 2014 at 12:07 PM
Also, Juan, Hansel's claim that the possibility of sensory leakage was not ruled out is based on extremely unlikely and purely hypothetical scenarios. For instance, in one case (if I recall correctly – it's been some time since I read the book), he speculated that the test subject "could have" climbed up on the roof of the building and peered in through the skylight in order to observe the professor preparing the cards for a card-guessing experiment. As it turned out, this was not even possible; the blueprints Hansel relied on were out of date, and the building at that point no longer offered roof access. But even if it had been possible for the test subject to climb on the roof and engage in this behavior, why would anyone assume that he had done so when there is not a smidgen of evidence to support it? Only implacable resistance to the very idea of psi could explain such desperate reasoning. (Indeed, Hansel begins his book with the statement that since ESP is obviously impossible, there must be some other explanation for every positive test result.)
Similarly, Hansel suggests that another test subject could have propped up a ladder and peered through the transom above a professor's office door in order to spy on him. This assumes that the test subject would take the risk of being discovered standing on a ladder in the hallway of a building that housed many offices and was used by many professors and students. It also assumes that the test subject was so determined to cheat that he would go to ridiculous lengths to do so. And again, there is not one iota of evidence that any such chicanery actually took place.
It is just not possible to rule out "logical possibilities" like this, which is why nobody is asked to do so in any other area of science (or life). A skeptic using this type of method could always say that the test subject had hidden a camera in the office to record everything that happened, or that the professor had faked the results, or that the experiment never took place and was simply made up out of whole cloth. Etc., etc. At some point it should be obvious that we are dealing with rationalizations, not legitimate objections.
By the way, Hyman's caveat about "the possible exception of proper randomization of targets during the sending and the judging procedures as well as the possibility of inadequate safeguards against sensory leakage" amounts to nothing but weasel words. The whole point of the autoganzfeld protocol, which Hyman developed, was to rule out these objections. Hyman was merely giving himself wiggle room so he would never have to concede that he was wrong. Some people will go to any lengths to defend their position, even after the evidence has clearly turned against them. It's an ego thing. As Hyman himself reportedly said to Gary Schwartz, "I do not have control over my beliefs."
http://tinyurl.com/qx22br8
Posted by: Michael Prescott | February 16, 2014 at 02:20 PM
"Similarly, Hansel suggests that another test subject could have propped up a ladder and peered through the transom above a professor's office door in order to spy on him...."
Ridiculous. Hansel sounds like someone more suited to fairy tales than science.
Must be due to a protective denial arising from a childhood encounter with a witch. Has Gretel weighed in on any of this?
Posted by: no one | February 16, 2014 at 02:31 PM