We now have more than double the data we had last year. That should be enough to see whether the trends we were seeing in the 2011 data are still there, or whether they've gone away. It's a very exciting time.

And here are the words of James Gillies, CERN spokesman (via Not Even Wrong):
Combining the data from two experiments is a complex task, which is why it takes time, and why no combination will be presented on Wednesday.

And today ATLAS has published a preprint: Combined search for the Standard Model Higgs boson in $pp$ collisions at $\sqrt{s} = 7$ TeV with the ATLAS detector
A combined search for the Standard Model Higgs boson with the ATLAS detector at the LHC is presented. The datasets used correspond to integrated luminosities from 4.6 fb$^{-1}$ to 4.9 fb$^{-1}$ of proton-proton collisions collected at $\sqrt{s} = 7$ TeV in 2011. The Higgs boson mass ranges of 111.4 GeV to 116.6 GeV, 119.4 GeV to 122.1 GeV, and 129.2 GeV to 541 GeV are excluded at the 95% confidence level, while the range 120 GeV to 560 GeV is expected to be excluded in the absence of a signal. An excess of events is observed at Higgs boson mass hypotheses around 126 GeV with a local significance of 2.9 standard deviations (sigma). The global probability for the background to produce an excess at least as significant anywhere in the entire explored Higgs boson mass range of 110-600 GeV is estimated to be ~15%, corresponding to a significance of approximately one sigma.

I must remind you that, in order to declare the discovery of a new particle, the result must be released at 5 sigma, and the ATLAS data is given at 2.9 sigma(1).
We are really near the Higgs boson, but we don't have certainty, so I think that tomorrow nobody will say "We have discovered the Higgs boson" (but I could be wrong).
(1) The data distribution is usually a Gaussian. Sigma indicates the fraction of the data I must expect if I perform an experiment in that range. In other words, if I produce a result $m$ at 1 sigma, I am saying that my collection covers about 68% of the statistical distribution, or that I know well only 68% of the distribution. If I produce a result $m$ at 3 sigma, my collection covers 99.7% of the statistical distribution. But if I want the certainty to explain $m$ correctly, I must produce the result at 5 sigma, in order to cover 99.99994% of the distribution.
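The coverage percentages in the footnote can be checked directly: for a Gaussian, the fraction of the distribution within $\pm n\sigma$ of the mean is $\mathrm{erf}(n/\sqrt{2})$. A minimal sketch in Python (the function name `coverage` is my own, not from the ATLAS analysis):

```python
import math

def coverage(n_sigma):
    """Fraction of a Gaussian distribution lying within +/- n_sigma of the mean."""
    return math.erf(n_sigma / math.sqrt(2))

for n in (1, 2, 3, 5):
    # 1 sigma: 68.27%, 2 sigma: 95.45%, 3 sigma: 99.73%, 5 sigma: 99.99994%
    print(f"{n} sigma covers {coverage(n):.5%} of the distribution")
```

This is why 3 sigma is only "evidence" while 5 sigma is a "discovery": at 5 sigma, the chance of the background alone producing such an excess at that mass point is below one in a million.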
At this point, an interesting objection is this: with 3 sigma we cover 99.7%, which is very near 100%, but in the remaining 0.3% of the data distribution there could be something that completely changes the result. In a trivial way, we can imagine sigma as the ability to read one specific range of energy: only if we read the whole range can we say that what we see is what there is, without doubt.