Here’s my book on cold fusion and the E-Cat

(This blog post was originally posted on


For three difficult years I have experienced much that I wanted to discuss, that I had thought people would want to investigate and understand better. Yet reaching out has been difficult for me. I want you, the reader, to comprehend, forgive and then participate.

The term ‘cold fusion’ is so stigmatized that everything even vaguely connected with it is ignored by media outlets in general and by the science community in particular. Unless it’s attacked. Meanwhile we might be missing an opportunity to change the world.

That’s why I’m relieved today, when I can finally share this story in my new book An Impossible Invention. It’s about, yes, cold fusion.

It’s actually two stories. One story in the book is about cold fusion itself, about the inventor Andrea Rossi and his energy device the ‘E-Cat,’ about the people around him and about how I became involved and subsequently investigated and contributed to a series of ongoing events in this scientific arena.

The other story in the book is about how people relate to the unknown, to the mysterious, to the improbable and to what we believe is ‘impossible.’ The story of how new ideas are accepted or rejected, of whether one is curious or uninterested, open-minded or prejudiced.

The book reveals events surrounding Rossi and the E-Cat. It should inspire some readers and upset others. I hope it will provoke discussions—lots of discussions, among other things about what is impossible and what isn’t. Consider what the British runner Roger Bannister—the first human to run a sub-four-minute mile, previously believed impossible—perceptively stated: “The human spirit is indomitable.”

Who knows what will happen? More is to come. You, the reader, will play an important role in determining how these matters evolve.

By the way–just as I’m writing these words I’m receiving new information on events that strengthen some pieces of the story in the book, and also some information that adds to my doubts regarding certain stakeholders. I cannot tell you more right now, but I will keep you updated on this blog and in the book’s free newsletter.

Google’s goal: To control the world’s data

The humanoid Atlas, developed by Boston Dynamics.

In 2013, Google acquired eight companies specializing in robotics, and many have asked what Google will do with all those robots.

The eighth company was Boston Dynamics, which through funding from DARPA has developed a couple of high-profile animal-like robots and the two-legged humanoid Atlas.

A week after that acquisition, Google became the world’s robot king when one of the companies it had previously bought won the final trials of the DARPA Robotics Challenge–a competition where robots are expected to manage tasks like climbing a ladder, punching a hole through a wall, driving a car and closing valves. Second was a team that used the Atlas robot.

Google’s interest in robots should be seen in the broader context of its other ventures–everything from the digital glasses Google Glass and driverless vehicles to its established services: web search, maps, Street View, videos, the Android OS, web-based office applications and Gmail. Plus the latest big acquisition, at $3.2 billion–Nest Labs, which develops the self-learning thermostat Nest and the connected smoke detector Protect.

The common denominator is data. Large amounts of data about what users are doing and thinking, about where they go and what the world looks like.

This fits the robot venture. As robots become more capable, they will perform increasingly sophisticated tasks and gradually take over many jobs from humans. During their work, they will collect huge amounts of data, about everything, everywhere in the world.

It is not obvious that Google will have access to all this data. Nest, for example, has made it clear that the company’s policy on privacy remains firm after the takeover, and that data from thermostats may only be used to ‘improve products and services’.

But Google has repeatedly demonstrated its ability to offer attractive free services where users willingly share their data in exchange for the service.

Added to this is Google’s focus on learning machines and advanced artificial intelligence — most recently through the acquisition of the British AI company DeepMind, reportedly for around $500 million, and also through the recruitment of futurist and entrepreneur Ray Kurzweil as a director of engineering last year (Ray Kurzweil’s latest book is called How to Create a Mind).

If it is possible to develop an artificial consciousness in a machine, one may ask how far such a consciousness reaches. One way to respond–which I touched on in this post–is to compare it with a human being, whose consciousness reaches as far as her body and its senses. An artificial consciousness would then by analogy be limited to the sensors it controls in order to collect data.

Google is then in a good position. And though I don’t believe that Google has any evil plans at all, this scares me far more than all the surveillance in which the NSA and other intelligence agencies are engaged, combined.

Interception and surveillance will never yield nearly as much data about us as Google can get, and they can be regulated. What Google will do with all the data that we willingly share is something no one else can control.

(This post was also published in Swedish in Ny Teknik).

Survival of the Fittest Technology

I have already outlined the ideas of author and entrepreneur Ray Kurzweil, currently a Director of Engineering at Google, on exponentially accelerating technological change. His ideas are based on what he calls the Law of Accelerating Returns — the fairly intuitive suggestion that whatever is developed somewhere in a system increases the total speed of development in the whole system.

The counterintuitive result of this is an exponentially increasing pace, which is nonetheless supported by observation; at this moment the pace of development doubles about every decade, leading to a thousandfold increase in this century compared to the last.
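The thousandfold figure is just compounding: a doubling every decade, repeated over the ten decades of a century, multiplies the pace by 2^10. A minimal sketch of the arithmetic:

```python
# A doubling of the pace of development every decade, compounded
# over the ten decades of a century:
decades_per_century = 10
growth = 2 ** decades_per_century
print(growth)  # 1024 -- roughly a thousandfold increase
```

So a "thousandfold" is not an extra assumption on top of the doubling; it is the same claim restated on a century scale.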

I have also discussed the thoughts of Kevin Kelly, described in his book What Technology Wants. Kelly suggests, in line with Kurzweil, that technological development is a natural extension of biological evolution, keeping up the exponential pace that can be observed all the way from single-celled organisms (although you could discuss whether DNA has actually had the time to evolve on Earth).

I also find Kelly’s suggestion intuitive. If you consider spoken language one of man’s first technological inventions, you could ask whether it is not so intimately linked to the human brain that it could be regarded as part of evolution. Spoken language is a grey zone between evolution and technology that highlights the links between them and their dependence on each other — both having a similar nature if you see them as a whole and look beyond the molecules and atoms they are made of.

This leads to a concept that I have been surprised to find hardly mentioned before — the Survival of the Fittest Technology.

It’s the idea that technological inventions obey the same rules as evolutionary steps in nature. Only the most fit (best adapted, best conceived) inventions will reach the market and gain massive support and usage among people and thus survive and be subject to further development, refinement and combination with other technologies.

This idea is intimately linked to what the biologist and researcher Stuart Kauffman calls the adjacent possible—that new inventions are based on fundamentals and skills already in place–a concept that the author Steven Johnson develops in the book Where Good Ideas Come From: The Natural History of Innovation (2010):

“The adjacent possible is a kind of shadow future, hovering on the edges of the present state of things, a map of all the ways in which the present can reinvent itself.”

Some adjacent possibles are inherently stronger and more fit than others. When you already have the telephone, the idea of making it cordless and then mobile is so natural and strong that it simply cannot avoid being realized. Different details in the development of mobile phones are equally exposed to the survival of the fittest, defining the path to a robust and useful technological solution.

What you could ask, and what has been discussed by several people, is whether there are multitudes of different paths that evolution and technological development could take, or whether the adjacent possible and the survival of the fittest have such strong inherent patterns that there is basically only one way, with small variations. This would mean that if we replayed everything from the Big Bang, the result would be essentially the same.

Kevin Kelly also reasons along these lines. He suggests that there is a third driving mechanism behind evolution, besides random changes/mutations and natural selection/survival of the fittest. The third vector is structure: inevitable patterns that form in complex systems due to, for example, physical laws and geometry.

He then proposes that technological development is based on a similar triad where the natural selection is replaced by human free will and choice.

I like this link between evolution and technology. But I believe it is the random change that is replaced by, or at least mixed with, human free will and choice. Accidents and random changes happen, but the function of mutations in nature would largely correspond to humans’ intentional design of technology, changing different aspects at will.

My point, however, is that natural selection is not replaced by human choice. It is as present in technological development as in biological evolution. Although the survival of the fittest technology is a result of human choice and free will, it is the sum of many individuals’ choices, a collective phenomenon that no single mind can control.

And therefore Survival of the Fittest Technology appears to be what survival of the fittest is in nature–an invention/species being exposed to a complex and interacting environment where only the best conceived and best adapted thrive.

Issue number 3 of Next Magasin focusing on cyborgs

Next Magasin 3

We have just released a fresh issue of the Swedish forward-looking digital magazine Next Magasin, for which I am the managing editor, this time focusing on cyborgs. The main feature by journalist Siv Engelmark is a fascinating journey through our use of technology to make humans into something more than human.

Engelmark has talked to cognitive scientists, literary scholars, philosophers, pioneers and futurists, trying to find out what the consequences of human enhancement are and what people think of them.

Next Magasin 2

Apart from the feature story there are several interesting pieces on subjects such as electronic blood, biomimetics with bumblebees, brain controlled vehicles, space buildings on Earth, sensor swarms, disruption of healthcare, synthetic biology, teleportation and more.

The magazine is subscription-based and can be downloaded as an app for iOS or Android, or read on a PC.

By the way — I forgot to post about the release of issue number 2 back in June 2013. Journalist Peter Ottsjö wrote an eye-opening feature story on virtual worlds, which are not, as you may think, just old remnants of Second Life.

Instead, virtual worlds are today developing into a rich set of opportunities for both professionals and consumers, and they are bound to take a larger part in our lives than most people realize, bringing significant changes to our way of living.

On the front page you can see the Japanese megastar Hatsune Miku, who is entirely virtual — a virtual synthesizer voice for which fans can write songs, performed as a projection of the virtual artist in real arenas with thousands of people watching.

In a decade or two, the physical world will be just a subset of our lives.

Here are three good reasons to have a look at cold fusion

Stanley Pons and Martin Fleischmann with their reactor cell.

Ever since Martin Fleischmann and Stanley Pons presented their startling results in 1989, claiming that they had discovered a process that generated anomalously high amounts of thermal energy, possibly through nuclear fusion at room temperature, cold fusion has been rejected by the mainstream scientific community.

For anyone open to believing the contrary, here are three good reasons (remember that cold fusion would be a clean, inexpensive and virtually inexhaustible energy source that could use a gram of hydrogen to run a car for a year):

1. Lessons from cold fusion archives and from history.
A comprehensive outlook on the field of cold fusion, including references to papers with specific instructions for anyone who would like to reproduce the Fleischmann and Pons effect (explaining why it is so difficult). Presented at the cold fusion conference ICCF-18, 2013, by Jed Rothwell, who runs an online library of documents and papers on cold fusion.

2. The Enabling Criteria of Electrochemical Heat: Beyond Reasonable Doubt
A paper from 2008 by Dennis Cravens and Dennis Letts, indicating four criteria for reproducing the Fleischmann and Pons effect. Cravens and Letts had gone through 160 papers concerning generation of heat from the F&P effect and found four criteria correlated with reports of successful experiments, whereas negative results could be traced to researchers not fulfilling one or more of those conditions.

3. A brass ball remaining four degrees warmer than another.
An elegantly designed experiment by Dennis Cravens, performed recently at NI Week 2013, where two brass balls rested in a bed of aluminum beads at constant temperature. Yet one of the brass balls, containing an experimental set-up with materials similar to those in Fleischmann and Pons’ experiment, remained four degrees warmer than the bed and the other ball, with no external energy input. This is not a replication of the F&P effect, but it indicates that the process can be implemented in different forms (gas-loaded instead of electrolysis).

Please add a comment if you have any other comprehensive and convincing document to suggest, regarding cold fusion or LENR (Low Energy Nuclear Reactions).

Update on Defkalion’s reactor demo in Milan

(This update comes a little late; I apologize for that.)

Defkalion's reactor enclosed in ceramics and a metal casing. In the background Alex Xanthoulis and John Hadjichristos. Photo: Mats Lewan

Defkalion’s reactor demo in Milan in July has been discussed extensively. A series of concerns have been raised, among them that the flow measurement might not be accurate and that the flow of steam output into the sink was weaker than could be expected.

Regarding the steam flow, I have already said that I regret not having opened the valve leading straight down towards the floor (the one we used when calibrating the water flow) to get a visual observation of the steam flow. I have since understood that others asked to do the same thing but that Defkalion declined, arguing that opening that valve would disturb the equilibrium of the system.

After the demo I sent a couple of follow-up questions to Defkalion’s chief scientist, John Hadjichristos, and I would like to share his answers here.

Mats: A Faraday cage only shields from electric fields, not magnetic fields. Can you discuss further how the strong magnetic fields you mentioned, reaching 1.6 Tesla, were shielded?

Hadjichristos: First of all we wish to clarify that the reported magnetic anomaly values relate to peak measurements. Shielding of such “noise” is done using mu-metal materials and solenoids during tests having the declared objectives as in the protocol submitted to ICCF-18. I apologize for the technically incorrect use of the terms “cage” or “Faraday cage”, as used in our internal lab jargon.

From a reader: Between 21:10 and 21:33 the output temperature rose from 143°C to 166°C. But the inner reactor temperature was constant at 355–358°C the whole time, and the coolant flow was also constant at 0.57–0.59 liter/min. Is there any explanation for this phenomenon?

Hadjichristos: When coolant is in dry steam condition, flow is not constant. A pressure barrier within the coil surrounding the reactor creates flow fluctuations that result in such ‘strange’ thermal behavior of the coolant during the aforesaid period, strongly related also to stored energy in the reactor’s metals. This can be easily explained noting also:

As I explained live during the demo, the flow measurement algorithm in our Labview software uses the slope (first derivative) of the plot of the reported fn pulses from the flow meter, and not the n/(1/f1+1/f2+…+1/fn) or the more commonly used (f1+f2+…+fn)/n methods, as the latter are very sensitive, leading to huge systematic errors and wrong calorimetry results when such fluctuations occur. The consequent “cost” of the method we use is the delay in the reported values on screen, which obviously does not influence the total energy output calculations with any “noise”, as all fn values are used, whilst all thermometry measurements report “on screen” more quickly. All three flow calculation methods from the flow meter’s signals give identical instant flow measurement results only when f1=f2=…=fn, i.e. when no steam pressure blocks water from flowing smoothly from the grid.

Thanks to your reader for bringing up this issue of incorrect flow algorithms in use in similar calorimetry configurations, which has not been much commented on or analyzed in blogs.
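For readers who want to see how the three methods differ, here is a minimal sketch of my own (not Defkalion’s Labview code; the slope method is approximated as a least-squares fit of cumulative pulse count against time, and the sample values are invented):

```python
# Three ways to estimate flow from a train of flow-meter pulse
# frequencies f1..fn, as discussed in the answer above.

def harmonic_style(freqs):
    # n / (1/f1 + 1/f2 + ... + 1/fn)
    return len(freqs) / sum(1.0 / f for f in freqs)

def arithmetic_mean(freqs):
    # (f1 + f2 + ... + fn) / n
    return sum(freqs) / len(freqs)

def slope_estimate(freqs, dt=1.0):
    # Least-squares slope of cumulative pulse count vs. time:
    # smoother under fluctuations, but lags sudden changes.
    times, counts, total = [], [], 0.0
    for i, f in enumerate(freqs):
        total += f * dt  # pulses accumulated during this interval
        times.append((i + 1) * dt)
        counts.append(total)
    n = len(times)
    t_bar, c_bar = sum(times) / n, sum(counts) / n
    num = sum((t - t_bar) * (c - c_bar) for t, c in zip(times, counts))
    den = sum((t - t_bar) ** 2 for t in times)
    return num / den

steady = [2.0] * 6                        # constant flow
choppy = [2.0, 0.5, 3.5, 0.5, 3.5, 2.0]   # steam-induced fluctuations

# With constant frequencies (f1 = f2 = ... = fn), all three agree:
print(harmonic_style(steady), arithmetic_mean(steady), slope_estimate(steady))
# With fluctuating frequencies they diverge:
print(harmonic_style(choppy))  # pulled down by the low samples
print(arithmetic_mean(choppy), slope_estimate(choppy))
```

The sketch illustrates the point in the answer: the methods coincide exactly only when all fn are equal, and the averaging methods react differently to fluctuating pulse trains.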

Mats: Could you tell me which other external persons/validators were supposed to come and why they didn’t come?

Hadjichristos: No.

Mats: The sink where the steam was output, was it a normal sink with an open hole in the bottom leading to the ordinary drainage network, or was there any active venting, e.g. a fan, drawing gas down the sink? Could you also tell me the inner diameter of the steam outlet tube?

Hadjichristos: There was no active venting to or in the drainage. The output pipe driving the steam to the drainage network was a 1/2″ diameter copper pipe (not thermally insulated after the Tout thermocouple), whilst the PVC drainage pipe diameter was 2″. Cold water was flowing into the drainage hole from a water supply to protect the PVC drainage pipe from melting.

– – – –

Finally I would like to share some photos from the demo (click on the images for larger view).

The reactor with the metal casing open. Photo: Mats Lewan

The reactor chamber of a reactor not in use. Photo: Mats Lewan

Another reactor, not used during the demo. Photo: Mats Lewan

Tubes supplying hydrogen or argon gas to the reactor. Photo: Mats Lewan

The insulated outlet tube from the reactor at the thermocouple measuring outlet temperature. The valve with the red handle is open, letting water/steam flow upwards and eventually into the sink. Photo: Mats Lewan

The high voltage generator. Photo: Mats Lewan

Specification label on the high voltage generator. Photo: Mats Lewan

The vacuum pump used to degas the reactor between control run and active run. Photo: Mats Lewan

Spark plug inserted into reactor not in use. According to John Hadjichristos an ordinary spark plug, as opposed to an "other type of heavily modified spark plugs that we use in our plasma subsystem configuration." Photo: Mats Lewan

Thinking, fast and slow, pattern recognition and super intelligence

This summer’s reading has been Thinking, Fast and Slow by the Israeli-American psychologist and winner of the Nobel Memorial Prize in Economic Sciences, Daniel Kahneman.

Great reading (although a little heavy to read from start to end in a short time).

The book describes the brain’s two ways of thinking — the faster, more intuitive and emotional ‘System 1’, as Kahneman calls it, which incessantly interprets impressions and makes associations, and the slower, consciously controllable and more rational ‘System 2’, which solves problems, allows us to focus and to control ourselves, but also requires a significant effort when activated.

The message of the book is that we tend to rely too much on human judgment, frequently based on intuitive conclusions served by System 1 — conclusions that System 2 often lazily accepts, instead of activating itself to assess them rationally.

For me, however, the book prompted a couple of other thoughts. One was that System 1’s constant search for patterns and recognition is reminiscent of an idea of what the basic algorithm of the brain’s way of working could be. Author and entrepreneur Ray Kurzweil has suggested that pattern recognition is what the brain is in fact engaged in, at levels from dots and dashes all the way up to abstract phenomena like irony.

Kurzweil presents this idea in his book How to Create a Mind (2012) and he calls it the Pattern Recognition Theory of Mind (read more in this earlier post where I also note that Kurzweil is now Director of Engineering at Google, working with machine learning).

I was also struck by the idea that the image of the two systems could help when trying to imagine what super-intelligence might be like (which I discussed in this post). Supposing that machines will one day, not too far off, achieve human intelligence and consciousness, which I believe is reasonable (by the way — have a look at this research, in which an AI system was IQ-tested and judged to have the intelligence of a four-year-old), then they will soon afterwards become super-intelligent, although that might be difficult to comprehend.

But try to imagine the associative power of System 1, constantly tapping into years of experience of different patterns, phenomena, objects, behaviors, emotions etc., and then imagine having the same kind of system tapping into a much larger quantity of stored data, performing associations at significantly higher speed.

Then imagine a System 2 able to assess the input from such a System 1 on steroids, able, for example, to perform multidimensional analysis — i.e. the same kind of classic sorting we do when we picture a phenomenon in four quadrants, with two axes defining two different variables (like this one), except that a super-intelligent System 2 would do the same thing with a thousand variables.
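The quadrant picture scales mechanically, even if our intuition does not: with k variables there are 2^k sign combinations instead of 4. A small illustration of my own (not from Kahneman or Kurzweil):

```python
# Four-quadrant sorting generalized: with k axes, each data point
# falls into one of 2**k "quadrants" (orthants), identified by the
# sign of each coordinate. Two axes give 4 regions; ten give 1024;
# a thousand would give 2**1000, far beyond human visualization.

def orthant(point):
    """Return a tuple of '+'/'-' signs, one per dimension."""
    return tuple('+' if x >= 0 else '-' for x in point)

print(orthant((3, -2)))           # ('+', '-'): one of 4 quadrants
print(orthant((1.0, -0.5, 2.0)))  # 3 axes -> one of 8 octants
print(2 ** 2, 2 ** 10)            # 4 regions for 2 axes, 1024 for 10
```

The mechanics are trivial for a computer; what is hard for us is not the sorting itself but holding the resulting picture in mind.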

Such an intelligence would probably have forbearance with our limited capacity to see the whole picture, but hopefully it would also have sympathy for our capacity to enjoy life with our limitations.

