(This blog post was originally posted on Animpossibleinvention.com)
The scientific newsroom of Sveriges Radio, the national Swedish Radio, has dedicated four months of research and a whole week of its air time to the story of Andrea Rossi, the E-Cat and cold fusion (part 1, 2, 3, 4), and I’m honored that it has made me one of its main targets.
The result, however, is not impressive.
Ulrika Björkstén, head of the scientific editorial staff, has chosen freelance journalist Marcus Hansson to do the investigation.
Hansson apparently likes easy solutions. Black or white. I won’t go into detail about his analysis of Rossi’s background, since I have no reason to defend Rossi. I’m simply noting that Hansson believes he can sort out the truth in the twinkling of an eye in Italy, a country known as one of the most corrupt in Europe, where the mix of powerful interests, politics and the judiciary is not always easy to penetrate.
I’m also noting tendentious conclusions, such as the idea that being sentenced to prison implies being an impostor, and unproven claims, such as the assertion that storing toxic waste in leaking cisterns equals the Mafia’s way of dumping such waste in secret pits.
After his analysis of Rossi, Hansson adds a group of Swedish researchers and the Swedish power industry’s research entity Elforsk, depicting them all as a bunch of gullible fools being used by Rossi for his purposes, and pointing at me as the one who got them involved in the first place. I’m flattered.
Hansson considers all this obvious, basing large parts of his report on the testimonials and opinions of the Italian-French writer Sylvie Coyaud, scientific blogger for the weekly Italian style magazine D-La Repubblica.
But all this is only half of the problem.
Hansson starts his reportage by stating that the famous claim by Fleischmann and Pons in 1989, of excess heat compatible with a nuclear reaction, was wrong and later explained by erroneous measurements.
I believe he’ll find that hard to prove, given that by 2009 there were 153 peer-reviewed papers describing excess heat in experimental set-ups such as the one used by Fleischmann and Pons. And that’s only one of many reasons.
I discuss this in the beginning of my book. Hansson says he read the book and found it to be a tribute to Rossi. Coyaud says it’s a story where Rossi is Messiah and I am the Prophet. That’s poetic, but it’s an opinion.
Among the hundreds who have read it, about fifty have written reviews, most of them giving it the highest rating. A number of highly competent people with insight into the story have found it well balanced.
I do discuss Rossi’s problematic background in the book, and after that I also discuss his problematic personality.
But the main focus I have chosen is a different one, reflecting the title of the book: discussing what is considered to be impossible, and asking why more resources aren’t dedicated to investigating this strange phenomenon that could possibly change the world, providing clean water and clean air, saving millions of lives and solving the climate crisis.
Not because I wish this to be true, but because there are abundant scientific results indicating that the phenomenon might be real.
It’s insane that curious researchers are hesitating to enter this field for fear of ruining their careers (yes Björkstén, this is why most of them are old), and it’s insane that poorly researched media reports like this help scientific critics to continue attacking those researchers.
Marcus Hansson says he has read my book, but maybe he hasn’t understood what he read. In fact I’m worried that neither he nor Coyaud have the competence to evaluate this complex story from a scientific perspective. I might be wrong, but from Hansson’s reportage I’m not convinced.
What I find more problematic though is the position of Ulrika Björkstén, head of the scientific editorial staff at Sveriges Radio, holding a Ph.D. in physical chemistry. I agree with most observers that it’s not proven whether Rossi’s E-Cat works or not, and Björkstén might of course be convinced that it’s not working.
But in a concluding comment Björkstén discards the whole area of cold fusion/LENR as pseudo-science, stating that it is based on belief and group thinking, and that university researchers should discern such research from real science and stay away from it.
I find this alarming both from a journalistic and a scientific point of view. Such opinions have often been expressed regarding disruptive discoveries, and if we took advice only from people like Björkstén we would probably not have any airplanes or semiconductors today.
I welcome serious criticism of my reports and of my book, but this reportage does not qualify. I’m not impressed, and I hope that the next scientific news team that decides to evaluate this story and my book will set the bar higher.
You might agree with me or not. If you have an opinion, I would suggest that you write an email to Ulrika Björkstén who oversaw the production of this reportage. Marcus Hansson probably just did his best.
- – – -
N.B. This is my personal opinion and not a statement from Ny Teknik. UPDATE: Here’s an official op-ed by Ny Teknik’s editor-in-chief Susanna Baltscheffsky. And here’s a piece by the Swedish researchers who have been involved in tests.
The measurement setup that was used by Defkalion Green Technologies (DGT) on July 23, 2013, in order to show in live streaming that the Hyperion reactor was producing excess heat, does not measure the heat output correctly, and the error is so large that the reactor might not have worked at all.
This is the conclusion of a report (download here) by Luca Gamberale, former CTO of the Italian company Mose srl that at that time was part of the joint venture Defkalion Europe, owned together with DGT.
The report is based on experiments, performed mainly after the live streaming, using the same setup but without the reactor being active. Yet, the experiments showed that it was possible to obtain a measured thermal power of up to about 17 kW, while the input electric power was about 2.5 kW.
I asked Gamberale if this erroneous result could have been present without DGT realizing it.
“To obtain this effect it’s necessary to operate two valves in a certain way, so you need to have the intention to do it,” Gamberale told me.
Those of you who have read my book ‘An Impossible Invention’ know that Defkalion was an early partner to Rossi, supposed to build applications using Rossi’s reactor as a heat source. When Rossi ended the agreement with Defkalion in August 2011, Defkalion stated that operations continued, and later Defkalion claimed to have developed its own similar technology, producing heat from a reaction involving nickel and hydrogen.
Test results and measurement data were never disclosed, but in July 2013 Defkalion finally decided to make a public demo, live streamed during the cold fusion conference ICCF 18. I was present at the demo on July 23 in Milan, Italy, and reported my impressions in two blog posts here and here, trying to be as objective and neutral as possible, since I believe that my readers should draw their own conclusions.
“If you believe the values presented…”, I wrote, and that was also the main problem. In such a short time frame it was not easy to verify possible errors or hidden mechanisms, especially since Defkalion didn’t accept changes to the setup, and it was therefore not evident that you should believe the values. I reported them as presented, though.
Gamberale describes in the report that before the demo, Mose had proposed a series of improvements to the measurement setup in order to make it more reliable but that DGT did not allow these changes. He notes that the lack of cooperation made it necessary to carry out independent verification tests.
The tests focused on a possible malfunction of the digital flow meter used to measure water flow in the setup. It was shown that by decreasing the input water flow to almost zero, the flow meter started to make fast movements back and forth, and since the direction of the flow was not registered by the flow meter, these fast movements resulted in a reading corresponding to a relatively high flow, although the flow was almost zero.
Since the calculation of thermal heat was based on how much water was heated by the reactor, this measurement error resulted in a large calculated thermal heat output, while the actual thermal heat was much lower.
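To see how large the effect of such a misreading can be, here is a minimal flow-calorimetry sketch in Python. Only the formula (thermal power equals mass flow times specific heat times temperature rise) comes from the method described above; the flow and temperature values are purely illustrative assumptions, not Defkalion’s actual data.

```python
# Flow calorimetry: thermal power = mass flow * specific heat * temperature rise.
# All numeric values below are illustrative assumptions.

C_WATER = 4186.0  # specific heat of water, J/(kg*K)

def thermal_power_kw(flow_kg_per_s, delta_t_kelvin):
    """Thermal power in kW carried away by the heated water."""
    return flow_kg_per_s * C_WATER * delta_t_kelvin / 1000.0

# Suppose the water leaving the reactor is 60 K hotter than the inlet.
delta_t = 60.0

# Actual flow is nearly zero...
actual = thermal_power_kw(0.004, delta_t)   # roughly 1 kW

# ...but a flow meter that counts back-and-forth oscillations as forward
# flow might report a far larger value, inflating the computed output.
misread = thermal_power_kw(0.068, delta_t)  # roughly 17 kW
```

The point is that the computed power scales linearly with the reported flow, so a flow meter reading an order of magnitude too high produces an order-of-magnitude phantom excess heat.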
The explanation is thoroughly discussed in the report. Most important, however, is that Gamberale’s experiment showed that the setup could produce readings of large amounts of excess heat without the reactor running, and that any result from the setup showing excess heat is therefore unreliable.
Gamberale explained to me that he presented these findings to Defkalion’s president Alexander Xanthoulis, and to Defkalion’s engineer Stavros Amaxas who was operating the setup at the public demo.
According to Gamberale, Xanthoulis said “Ok, we don’t know, this could be possible, but in any case we are sure that the reaction exists”.
Gamberale described Amaxas’ reaction to be much stronger. Defkalion’s CTO John Hadjichristos was not present at that meeting.
In his report, Gamberale also notes that Mose srl has given DGT some time to provide evidence that its technology is real, despite the findings presented, but that after several months, no answer has been given.
As I write in my book, Gamberale and the president of Mose srl, Franco Cappiello, who told me that he had invested €1 million in the joint venture, decided to put all commercial activity on hold until Defkalion could carry out a measurement that dispelled their doubts. They later closed Defkalion Europe altogether.
I called Alexander Xanthoulis and asked for a comment. He didn’t dispute the result of the report but pointed out that the calorimetric set-up at the Milan demo was not made by Defkalion but by Mose. Gamberale confirmed this but explained that the set-up was made according to strict instructions from Defkalion, and that whenever Mose added a component, such as an independent second flow meter or another method for measuring thermal heat output, these additional components were immediately removed by Defkalion personnel without discussion.
Xanthoulis also said that he didn’t understand why Gamberale hadn’t asked these questions earlier during months of contacts and visits by Mose at Defkalion’s offices in Canada, and by Defkalion in Milan. Gamberale explained that he had tried to get the information he needed but that he was never allowed to make the measurements he asked for. Instead he described his role as one of an observer.
Finally, Xanthoulis pointed out that the flow calorimetry measurements (measuring thermal energy output by heating flowing water) were not the important ones; the most important measurements were made on the bare reactor, calculating the thermal energy output from temperatures measured at various points on the reactor, without heating any water, using the Stefan–Boltzmann law. He told me that these measurements had been sent to Gamberale twice.
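For readers unfamiliar with that method: the Stefan–Boltzmann law relates the power radiated by a hot surface to the fourth power of its temperature, P = εσA(T⁴ − T⁴ambient). A minimal sketch, where the emissivity, surface area and temperatures are purely illustrative assumptions and not values from Defkalion’s spreadsheets:

```python
# Radiated power from a hot surface via the Stefan-Boltzmann law:
#   P = epsilon * sigma * A * (T^4 - T_ambient^4)
# All numeric inputs are illustrative assumptions.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2*K^4)

def radiated_power_w(temp_k, area_m2, emissivity=0.9, ambient_k=293.0):
    """Net thermal radiation (watts) from a surface at temp_k (kelvin)."""
    return emissivity * SIGMA * area_m2 * (temp_k**4 - ambient_k**4)

# A 0.05 m^2 surface at about 500 C (773 K) radiates roughly 0.9 kW.
p = radiated_power_w(773.0, 0.05)
```

The T⁴ dependence is why this kind of estimate is sensitive to the assumed emissivity and to exactly where on the reactor the temperatures are measured, which is presumably why Gamberale wanted the raw data explained.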
“He sent an Excel spreadsheet with no explanation including a couple of incomprehensible graphs in which it was not even written what it was about. I felt almost offended. I’m asking a justification of an abnormal result regarding a claim of a nuclear reaction that would change the history of the world, and I get an Excel sheet without any specification of what it is,” Gamberale commented.
I got the spreadsheets from Gamberale. They contain temperature measurements in degrees Celsius on various points of the reactor and can be downloaded here (sheet 1 and sheet 2). I know they are accurate since Xanthoulis sent me one identical document, asking me not to publish it.
I have studied Gamberale’s report and I find it both detailed and convincing. It should make Defkalion’s case difficult.
Gamberale doesn’t openly accuse Defkalion of fraud, but he makes it clear that the Milan demo presented no evidence that the technology is working.
The doubts I have had about Defkalion, described in my book, are obviously increased by the report. Some wondered about the uncertainty regarding Defkalion’s technology that I expressed recently in an interview with John Maguire at Q-niverse. One important reason was Gamberale’s report, which I had already received by then.
In the last chapter of the book I write that it’s hard to assess Defkalion, but that if its claims can be trusted, the company might have made the most progress among those working with LENR technology based on nickel and hydrogen. I now find that less likely.
Alexander Xanthoulis still claims, however, that the development of the new reactor is on track and that according to the plans it will be certified with regard to safety and security by a Canadian certifying body corresponding to the US Underwriters Laboratories within the next few months. After that, Defkalion could start licensing the technology to partners. National licenses were previously offered at EUR 40.5 million, and though Xanthoulis told me that five contracts have been signed, he also said that no money had yet been transferred.
But Defkalion will now have to present solid evidence to convince anyone that its technology is valid, and also let those people make changes to the test protocol and to the measurement set-up, if it’s necessary in order to eliminate uncertainties.
Gamberale told me that the findings he describes in the report could bring damage to serious research activities within LENR, but he also told me that he personally still believes that LENR is an important scientific and technological area and that he is getting involved in two other projects in this domain.
(Added on May 16): Gamberale has a PhD in theoretical high-energy physics from the University of Milan, and at the Milan-based Pirelli Labs he further developed the theoretical work in coherent electrodynamics by his late countryman Dr. Giuliano Preparata. His experimental work includes assessing the technology of BlackLight Power and studies on electrochemical loading of palladium wires.
As those of you who have already read my book ‘An Impossible Invention’ know, it’s written in memory of Martin Fleischmann (1927–2012), Sergio Focardi (1932–2013) and Sven Kullander (1936–2014). All three were important to my work, and all three left us while I was working on the book.
Sadly enough, several other researchers within the field of LENR and cold fusion passed away during the same period, and I would like to commemorate them too in this post (click on their names to get further information about their lives and their careers):
Again, if LENR/cold fusion turns out to be an important energy source that might bring fundamental change to the world, which, as you probably know by now, I personally believe, then none of these researchers were ever recognized for their important contributions to the knowledge in this field.
If my book can help raise public attention for LENR, and increase the possibilities to build on these researchers’ work in order to find out as soon as possible whether there’s a way to make this technology useful for humanity, I would be more than happy.
So far I have been overwhelmed by the response to the book. Many have given me strong support, for which I’m very grateful, and a few have criticized me, which has given me the opportunity to go through the arguments for bringing this story to public awareness.
Nobel Laureate Brian Josephson made a short review of the book at Nature.com, and you can read his review on the start page of Animpossibleinvention.com.
Frank Acland at E-Cat World interviewed me; the interview is published here.
Several persons have written reviews that you can find at the book shop An Impossible Invention — Shop (you’ll find the reviews under each version of the book).
An intense discussion has been going on on my personal blog — “The Biggest Shift Ever”.
And many of you have emailed me directly with wonderful personal support. Thanks!
I’ve also found a few errors which have now been corrected in the e-book version:
The Italian words cappuccino and colazione were misspelled, as was the name of the road Viale Fulvio Testi in Milan, and also the name of the Italian steel mill company Falck (which I at one occasion called Salk). Due to an error in translation from Swedish, I put a binocular in the hands of Galileo Galilei, but of course he used a telescope.
As you know, this story is still unfolding and I’m receiving information that I will share in this blog, and that will also be added to both the ebook and the paperback in upcoming editions.
For three difficult years I have experienced much that I wanted to discuss, that I had thought people would want to investigate and understand better. Yet reaching out has been difficult for me. I want you, the reader, to comprehend, forgive and then participate.
The term ‘cold fusion’ is so stigmatized that everything even vaguely connected with it is ignored by media outlets in general and by the science community in particular. Unless it’s attacked. Meanwhile we might be missing an opportunity to change the world.
That’s why I’m relieved today, when I can finally share this story in my new book An Impossible Invention. It’s about, yes, cold fusion.
It’s actually two stories. One story in the book is about cold fusion itself, about the inventor Andrea Rossi and his energy device the ‘E-Cat,’ about the people around him and about how I became involved and subsequently investigated and contributed to a series of on-going events in this scientific arena.
The other story in the book is about how people relate to the unknown, to the mysterious, to the improbable and to what we believe is ‘impossible.’ The story of how new ideas are accepted or rejected, of whether one is curious or uninterested, open-minded or prejudiced.
The book may reveal events surrounding Rossi and the E-Cat. It should inspire some readers and upset others. I hope it will provoke discussions—lots of discussions, among other things about what’s impossible or isn’t. Consider what the British runner Roger Bannister—the first human to run a sub-four-minute mile, previously believed impossible—perceptively stated: “The human spirit is indomitable.”
Who knows what will happen? More is to come. You, the reader, will play an important role in determining how these matters evolve.
By the way, just as I’m writing these words I’m receiving new information on events that strengthen some pieces of the story in the book, and also some information that adds to my doubts regarding certain stakeholders. I cannot tell you more right now, but I will keep you updated in this blog and in the free newsletter of the book.
In 2013, Google acquired eight companies specializing in robotics, and many have asked what Google will do with all those robots.
The eighth company was Boston Dynamics, which through funding from DARPA has developed a couple of high-profile animal-like robots and the two-legged humanoid Atlas.
A week after that acquisition, Google became the world’s robot king when one of the companies it had previously bought won the final trials of the DARPA Robotics Challenge, a competition where robots are expected to manage tasks like climbing a ladder, punching a hole through a wall, driving a car and closing valves. Second was a team that used the Atlas robot.
Google’s interest in robots should be seen in the broader context of its other ventures, everything from the digital glasses Google Glass and driverless vehicles to its established services: web search, maps, Street View, videos, the Android OS, web-based office applications and Gmail. Plus the latest big acquisition, at $3.2 billion: Nest Labs, which develops the self-learning thermostat Nest and the connected smoke detector Protect.
The common denominator is data. Large amounts of data about what users are doing and thinking, about where they go and what the world looks like.
It fits the robot venture. As robots become more capable they will perform increasingly sophisticated tasks and gradually take over many jobs from humans. In doing that work, they will collect huge amounts of data, about everything, everywhere in the world.
It is not obvious that Google will have access to all this data. Nest, for example, has made it clear that the company’s policy on privacy remains firm after the takeover, and that data from thermostats may only be used to ‘improve products and services’.
But Google has repeatedly demonstrated its ability to offer attractive free services where users willingly share their data in exchange for the service.
Added to this is Google’s focus on learning machines and advanced artificial intelligence, most recently through the acquisition of the British AI company DeepMind, for a sum reported in the hundreds of millions of dollars, and through last year’s recruitment of futurist and entrepreneur Ray Kurzweil as a director of engineering (Kurzweil’s latest book is called How to Create a Mind).
If it is possible to develop an artificial consciousness in a machine, one may ask how far such a consciousness reaches. One way to respond, which I touched on in this post, is to compare with a human being, whose consciousness reaches as far as her body and its senses. An artificial consciousness would then, by analogy, be limited to the sensors it controls in order to collect data.
Google is then in a good position. And though I don’t believe that Google has any evil plans at all, this scares me far more than the surveillance in which NSA and other intelligence agencies are engaged, combined.
Interception and surveillance will never give nearly as much data about us as Google can get, and it can be regulated. What Google will do with all the data that we willingly share is something no-one else can control.
(This post was also published in Swedish in Ny Teknik).
I have already outlined the ideas of author and entrepreneur Ray Kurzweil, currently a director of engineering at Google, on exponentially accelerating technological change. His ideas are based on what he calls the Law of Accelerating Returns: the fairly intuitive suggestion that whatever is developed somewhere in a system increases the total speed of development in the whole system.
The counter-intuitive result of this is an exponentially increasing pace, which is nevertheless supported by observations; at this moment the pace of development doubles about every decade, leading to a thousandfold increase in this century compared to the last.
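The thousandfold figure is simply compounded doubling: ten doublings over a century’s ten decades give a factor of 2¹⁰. A trivial check:

```python
# A doubling every decade compounds over the ten decades of a century.
decades_per_century = 10
speedup = 2 ** decades_per_century  # 1024, i.e. roughly a thousandfold
```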
I have also discussed the thoughts of Kevin Kelly, described in his book What Technology Wants. Kelly suggests, in line with Kurzweil, that technological development is a natural extension of biological evolution, keeping up the exponential pace that can be observed all the way from single-celled organisms (although you could discuss whether DNA actually has had the time to evolve on Earth).
I find Kelly’s suggestion intuitive as well. If you consider spoken language one of man’s first technological inventions, you could ask if it’s not so intimately linked to the human brain that it could be regarded as part of evolution. Spoken language is a grey zone between evolution and technology that highlights the links between them and their dependence on each other, both having a similar nature if you see them as a whole and look beyond the molecules and atoms they are made of.
This leads to a concept that I have been surprised to find hardly mentioned before: the Survival of the Fittest Technology.
It’s the idea that technological inventions obey the same rules as evolutionary steps in nature. Only the most fit (best adapted, best conceived) inventions will reach the market and gain massive support and usage among people and thus survive and be subject to further development, refinement and combination with other technologies.
This idea is intimately linked to what the biologist and researcher Stuart Kauffman calls the adjacent possible, the notion that new inventions are based on fundamentals and skills already in place, a concept that the author Steven Johnson develops in the book Where Good Ideas Come From: The Natural History of Innovation (2010):
“The adjacent possible is a kind of shadow future, hovering on the edges of the present state of things, a map of all the ways in which the present can reinvent itself.”
Some of the adjacent possibles are inherently strong and more fit than others. When you already have the telephone, the idea to make it cordless and then mobile is so natural and strong that it just cannot avoid being realized. Different details in the development of the mobile phones are equally exposed to the survival of the fittest, defining the path to a robust and useful technological solution.
What you could ask, and what has been discussed by several people, is whether there are multitudes of different paths that evolution and technological development could take, or whether the adjacent possible and the survival of the fittest have such strong inherent patterns that there’s basically only one way, with small variations. This would mean that if we replayed everything from the Big Bang, the result would be essentially the same.
Kevin Kelly also discusses along these lines. He suggests that there’s a third driving mechanism behind evolution, besides random changes/mutations and natural selection/survival of the fittest. The third vector is structure, inevitable patterns that form in complex systems due to e.g. physical laws and geometry.
He then proposes that technological development is based on a similar triad where the natural selection is replaced by human free will and choice.
I like this link between evolution and technology. But I believe that it’s the random change that is replaced by, or at least mixed with, human free will and choice. Accidents and random changes happen, but the function of mutations in nature would largely correspond to humans’ intentional design of technology, changing different aspects at will.
My point is, however, that natural selection is not replaced by human choice. It is as present in technological development as in biological evolution. Although the survival of the fittest technology is a result of human choice and free will, it’s a sum of many individuals’ choices, a collective phenomenon, that is not possible to control by any single mind.
And therefore Survival of the Fittest Technology appears to be what survival of the fittest is in nature–an invention/species being exposed to a complex and interacting environment where only the best conceived and best adapted thrive.
We have just released a fresh issue of the Swedish forward-looking digital magazine Next Magasin, for which I am the managing editor, this time focusing on cyborgs. The main feature reportage by journalist Siv Engelmark is a fascinating journey through our use of technology to make humans into something more than human.
Engelmark has talked to cognitive scientists, literary scholars, philosophers, pioneers and futurists, trying to find out what the consequences of human enhancement are and what people think of it.
Apart from the feature story there are several interesting pieces on subjects such as electronic blood, biomimetics with bumblebees, brain controlled vehicles, space buildings on Earth, sensor swarms, disruption of healthcare, synthetic biology, teleportation and more.
By the way, I forgot to post about the release of issue number 2 back in June 2013. Journalist Peter Ottsjö wrote an eye-opening feature story on virtual worlds, which are not, as you may think, just some old remnants of Second Life.
Instead, virtual worlds are today developing into a rich series of opportunities for both professionals and consumers, and they’re bound to take a larger part in our life than most people realize, bringing significant changes to our way of living.
On the front page you can see the Japanese mega star Hatsune Miku, who is all virtual: a virtual synthesizer voice for which fans write songs, performed by a projection of the virtual artist in real arenas with thousands of people watching.
In a decade or two, the physical world will be just a subset of our lives.