In 2013, Google acquired eight companies specializing in robotics, and many have asked what Google will do with all those robots.
The eighth company was Boston Dynamics, which through funding by DARPA has developed a couple of high-profile animal-like robots and the two-legged humanoid Atlas.
A week after that acquisition, Google became the world’s robot king when one of the companies it had bought earlier won the final trials of the DARPA Robotics Challenge–a competition where robots are expected to manage tasks like climbing a ladder, punching a hole through a wall, driving a car and closing valves. Second was a team that used the Atlas robot.
Google’s interest in robots should be seen in the broader context of its other ventures–everything from the digital glasses Google Glass and driverless vehicles to its established services–web search, maps, Street View, videos, the Android OS, web-based office applications and Gmail. Add to that the latest big acquisition, at $3.2 billion–Nest Labs, which develops the self-learning thermostat Nest and the connected smoke detector Protect.
The common denominator is data. Large amounts of data about what users are doing and thinking, about where they go and what the world looks like.
The robot venture fits this pattern. As robots become more capable they will perform increasingly sophisticated tasks and gradually take over many jobs from humans. During their work, they will collect huge amounts of data, about everything, everywhere in the world.
It is not obvious that Google will have access to all this data. Nest, for example, has made it clear that the company’s policy on privacy remains firm after the takeover, and that data from thermostats may only be used to ‘improve products and services’.
But Google has repeatedly demonstrated its ability to offer attractive free services where users willingly share their data in exchange for the service.
Added to this is Google’s focus on learning machines and advanced artificial intelligence — most recently through the acquisition of the British AI company DeepMind, reportedly for more than $400 million, and also through the recruitment of futurist and entrepreneur Ray Kurzweil as Director of Engineering last year (Kurzweil’s latest book is called How to Create a Mind).
If it is possible to develop an artificial consciousness in a machine, one may ask how far such a consciousness reaches. One way to respond–which I touched on in this post–is to compare with a human being, whose consciousness reaches as far as her body and its senses. An artificial consciousness would then, by analogy, be limited to the sensors it controls in order to collect data.
Google is then in a good position. And though I don’t believe that Google has any evil plans at all, this scares me far more than the surveillance in which the NSA and other intelligence agencies are engaged, combined.
Interception and surveillance will never give nearly as much data about us as Google can get, and it can be regulated. What Google will do with all the data that we willingly share is something no one else can control.
(This post was also published in Swedish in Ny Teknik).
I have already outlined the ideas of author and entrepreneur Ray Kurzweil, currently Director of Engineering at Google, on exponentially accelerating technological change. His ideas are based on what he calls the Law of Accelerating Returns — the fairly intuitive suggestion that whatever is developed somewhere in a system increases the total speed of development in the whole system.
The counterintuitive result of this is an exponentially increasing pace, which on the other hand is supported by observations; at this moment the pace of development doubles about each decade, leading to a thousandfold increase in this century compared to the last (ten doublings in a hundred years: 2^10 ≈ 1000).
I have also discussed the thoughts of Kevin Kelly described in his book What Technology Wants. Kelly suggests, in line with Kurzweil, that technological development is a natural extension of biological evolution, keeping up the exponential pace that can be observed all the way from single-celled organisms (although you could discuss whether DNA actually has had the time to evolve on Earth).
I also find Kelly’s suggestion intuitive. If you consider spoken language as one of man’s first technological inventions, you could ask if it’s not so intimately linked to the human brain that it could be regarded as part of evolution. Spoken language is a grey zone between evolution and technology that highlights the links between them and their dependence on each other — both having a similar nature if you see them as a whole and look beyond the molecules and atoms they are made of.
This leads to a concept that I have been surprised to observe as being hardly mentioned before — The Survival of the Fittest Technology.
It’s the idea that technological inventions obey the same rules as evolutionary steps in nature. Only the most fit (best adapted, best conceived) inventions will reach the market and gain massive support and usage among people and thus survive and be subject to further development, refinement and combination with other technologies.
This idea is intimately linked to what the biologist and researcher Stuart Kauffman calls the adjacent possible—that new inventions are based on fundamentals and skills already in place–a concept that the author Steven Johnson develops in the book Where Good Ideas Come From: The Natural History of Innovation (2010):
“The adjacent possible is a kind of shadow future, hovering on the edges of the present state of things, a map of all the ways in which the present can reinvent itself.”
Some of the adjacent possibles are inherently strong and more fit than others. When you already have the telephone, the idea to make it cordless and then mobile is so natural and strong that it just cannot avoid being realized. Different details in the development of mobile phones are equally exposed to the survival of the fittest, defining the path to a robust and useful technological solution.
What you could ask, and what has been discussed by several people, is whether there are multitudes of different paths that evolution and technological development could take, or if the adjacent possible and the survival of the fittest have such strong inherent patterns that there’s basically only one way, with small variations. This would mean that if we replayed everything from the Big Bang, the result would be essentially the same.
Kevin Kelly also reasons along these lines. He suggests that there’s a third driving mechanism behind evolution, besides random changes/mutations and natural selection/survival of the fittest. The third vector is structure, inevitable patterns that form in complex systems due to e.g. physical laws and geometry.
He then proposes that technological development is based on a similar triad where natural selection is replaced by human free will and choice.
I like this link between evolution and technology. But I believe that it’s the random change that is replaced by, or at least mixed with, human free will and choice. Accidents and random changes happen, but the function of mutations in nature would largely correspond to humans’ intentional design of technology, changing different aspects at will.
My point is, however, that natural selection is not replaced by human choice. It is as present in technological development as in biological evolution. Although the survival of the fittest technology is a result of human choice and free will, it’s a sum of many individuals’ choices, a collective phenomenon that no single mind can control.
And therefore Survival of the Fittest Technology appears to be what survival of the fittest is in nature–an invention/species being exposed to a complex and interacting environment where only the best conceived and best adapted thrive.
We just released a fresh issue of the Swedish forward-looking digital magazine Next Magasin, for which I am the managing editor, this time focusing on cyborgs. The main feature, by journalist Siv Engelmark, is a fascinating journey through how we use technology to make humans into something more than human.
Engelmark has been talking to cognitive scientists, literary scholars, philosophers, pioneers and futurists, trying to find out what the consequences of human enhancement are and what people think of it.
Apart from the feature story there are several interesting pieces on subjects such as electronic blood, biomimetics with bumblebees, brain controlled vehicles, space buildings on Earth, sensor swarms, disruption of healthcare, synthetic biology, teleportation and more.
By the way — I forgot to post the release of issue number 2 back in June 2013. Journalist Peter Ottsjö wrote an eye-opening feature story on virtual worlds, which are not, as you may think, just some old remnants of Second Life.
Instead, virtual worlds are today developing into a rich set of opportunities for both professionals and consumers, and they’re bound to take a larger part in our lives than most people realize, bringing significant changes to our way of living.
On the front page you can see the Japanese mega star Hatsune Miku, who is all virtual — a virtual synthesizer voice for which fans can write songs, performed by a projection of the virtual artist in real arenas with thousands of people watching.
In a decade or two, the physical world will just be a subset of our lives.
Ever since Martin Fleischmann and Stanley Pons presented their startling results in 1989, claiming that they had discovered a process that generated anomalously high amounts of thermal energy, possibly through nuclear fusion at room temperature, cold fusion has been rejected by the mainstream scientific community.
For anyone open to believing the contrary, here are three good reasons (remember that cold fusion would be a clean, inexpensive and virtually inexhaustible energy source that would use a gram of hydrogen to run a car for a year):
1. Lessons from cold fusion archives and from history.
A comprehensive outlook on the field of cold fusion, including references to papers with specific instructions for anyone who would like to reproduce the Fleischmann and Pons effect (explaining why it is so difficult). Presented at the cold fusion conference ICCF-18, 2013, by Jed Rothwell who runs lenr-canr.org — an online library with documents and papers regarding cold fusion.
2. The Enabling Criteria of Electrochemical Heat: Beyond Reasonable Doubt
A paper from 2008 by Dennis Cravens and Dennis Letts, indicating four criteria for reproducing the Fleischmann and Pons effect. Cravens and Letts had gone through 160 papers concerning generation of heat from the F&P effect, and found four criteria correlated to reports of successful experiments, whereas negative results could be traced to researchers not fulfilling one or more of those conditions.
3. A brass ball remaining four degrees warmer than another.
An elegantly designed experiment by Dennis Cravens, performed recently at NI Week 2013, where two brass balls were resting in a bed of aluminum beads at constant temperature. Yet one of the brass balls, containing another kind of experimental set-up with materials similar to those in Fleischmann and Pons’ experiment, remained four degrees warmer than the bed and the other ball, with no external energy input. This is not a replication of the F&P effect, but it indicates that the process can be implemented in different forms (gas-loaded instead of electrolysis).
Please add a comment if you have any other comprehensive and convincing document to suggest, regarding cold fusion or LENR (Low Energy Nuclear Reactions).
(This update comes a little bit late, I apologize for that).
Defkalion’s reactor demo in Milan in July has been discussed extensively. A series of concerns have been raised, among them that the flow measurement might not be accurate and that the flow of steam output into the sink was weaker than could be expected.
Regarding the steam flow I already said that I regret not having opened the valve leading straight down towards the floor (the one we used when calibrating the water flow) to get a visual observation of the steam flow. I have later understood that others have asked to do the same thing but that Defkalion declined, arguing that opening that valve would disturb the equilibrium in the system.
After the demo I sent a couple of follow up questions to Defkalion’s chief scientist, John Hadjichristos, and I would like to share his answers here.
Mats: A Faraday cage only shields from electric fields, not magnetic fields. Can you discuss further how the strong magnetic fields you mentioned, reaching 1.6 Tesla, were shielded?
Hadjichristos: First of all we wish to clarify that the reported magnetic anomalies values relate to peak measurements. Shielding of such “noise” is done using mu metal materials and solenoids during tests having the declared objectives as in the protocol submitted to ICCF18. I apologize for the technically not correct use of the terms “cage” or “Faraday cage” as used in our internal lab jargon.
From a reader: From 21:10 until 21:33 the output temperature rose from 143°C to 166°C. But the inner reactor temperature was constant at 355–358°C all the time, and the coolant flow was also constant at 0.57–0.59 liter/min. Is there any explanation for this phenomenon?
Hadjichristos: When the coolant is in dry steam condition, the flow is not constant. A pressure barrier within the coil surrounding the reactor creates flow fluctuations that result in such ‘strange’ thermal behavior of the coolant during the aforesaid period, strongly related also to stored energy in the reactor’s metals. This can be easily explained noting also:
As I explained live during the demo, the flow measurement algorithm in our LabVIEW software uses the slope (first derivative) of the plot of the reported fn pulses from the flow meter, and not the n/(1/f1+1/f2+…+1/fn) method or the more commonly used (f1+f2+…+fn)/n method, as the latter are very sensitive and lead to huge systematic errors and wrong calorimetry results when such fluctuations occur. The consequent ‘cost’ of the method we use is a delay in the values reported on screen, which obviously does not add any ‘noise’ to the total energy output calculations, since all fn values are used, whereas all thermometry measurements are ‘quicker’ in reporting on screen. All three flow calculation methods from the flow meter’s signals give identical instantaneous flow measurement results only when f1 = f2 = … = fn, i.e. when no steam pressure blocks water from flowing smoothly from the grid.
Thanks to your reader for bringing up this issue of incorrect flow algorithms in use in similar calorimetry configurations, which has not been much commented on or analyzed in blogs.
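To make the difference between these flow estimates concrete, here is a minimal sketch of my own in Python (not Defkalion’s actual LabVIEW code; the meter constant K and the pulse patterns are invented purely for illustration):

import numpy as np

K = 1000.0  # hypothetical meter constant, pulses per liter (made up for illustration)

def flow_arithmetic_mean(t):
    # (f1 + f2 + ... + fn) / n : arithmetic mean of instantaneous pulse frequencies
    f = 1.0 / np.diff(t)               # frequency between consecutive pulse timestamps
    return f.mean() / K * 60.0         # liters per minute

def flow_harmonic_mean(t):
    # n / (1/f1 + 1/f2 + ... + 1/fn) : harmonic mean, i.e. total pulses over total time
    f = 1.0 / np.diff(t)
    return len(f) / np.sum(1.0 / f) / K * 60.0

def flow_slope(t):
    # slope (first derivative) of the cumulative pulse count plotted against time
    counts = np.arange(len(t))
    slope, _ = np.polyfit(t, counts, 1)    # pulses per second, fitted over the whole window
    return slope / K * 60.0

# Steady flow: 10 pulses per second. Bursty flow: water held back, then released,
# with the same average rate of 10 pulses per second.
t_regular = np.cumsum(np.full(200, 0.1))
t_bursty = np.cumsum(np.tile([0.02, 0.18], 100))
for name, t in (("regular", t_regular), ("bursty", t_bursty)):
    print(name, flow_arithmetic_mean(t), flow_harmonic_mean(t), flow_slope(t))

With steady flow all three estimates agree; in the bursty case the arithmetic mean of the instantaneous frequencies overweights the short intervals and overshoots, while the slope-based estimate stays close to the true average rate, at the cost of a delay in the reported value.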
Mats: Could you tell me which other external persons/validators were supposed to come and why they didn’t come?
Mats: The sink where the steam was output, was it a normal sink with an open hole in the bottom leading to the ordinary drainage network, or was there any active venting, e.g. a fan, drawing gas down the sink? Could you also tell me the inner diameter of the steam outlet tube?
Hadjichristos: There was no active venting to or in the drainage. The output pipe driving the steam to the drainage network was a 1/2″ diameter copper pipe (not thermally insulated after the Tout thermocouple), whilst the PVC drainage pipe diameter was 2″. Cold water was flowing into the drainage hole from a water supply to protect the PVC drainage pipe from melting.
- - - -
Finally I would like to share some photos from the demo (click on the images for larger view).
Daniel Kahneman’s Thinking, Fast and Slow is great reading (although a little heavy to read from start to end in a short time).
The book describes the brain’s two ways of thinking — the faster, more intuitive and emotional ‘System 1’, as Kahneman calls it, which incessantly interprets impressions and makes associations, and the slower, consciously controllable and more rational ‘System 2’, which resolves problems, allows us to focus and to control ourselves, but also requires a significant effort when activated.
The message of the book is that we tend to rely too much on human judgment, frequently based on intuitive conclusions served by System 1 — conclusions that System 2 often lazily accepts, instead of activating itself to assess them rationally.
For me, however, the book brought a couple of other thoughts. One was that System 1’s constant search for patterns and recognition is reminiscent of an idea of what the basic algorithm of the brain’s way of working could be. Author and entrepreneur Ray Kurzweil has suggested that pattern recognition is what the brain is in fact engaged in, at levels from dots and dashes all the way to abstract phenomena like irony.
Kurzweil presents this idea in his book How to Create a Mind (2012) and he calls it the Pattern Recognition Theory of Mind (read more in this earlier post where I also note that Kurzweil is now Director of Engineering at Google, working with machine learning).
I was also struck by the idea that the image of the two systems could help when trying to imagine what super-intelligence might be like (which I discussed in this post). Supposing that machines will one day, in a not too distant future, achieve human intelligence and consciousness, which I believe is reasonable (by the way — have a look at this research in which an AI system was IQ-tested and judged to have the intelligence of a four-year-old), then they will soon after become super-intelligent, although that might be difficult to comprehend.
But try to imagine the associative power of System 1, constantly tapping into years of experience of different patterns, phenomena, objects, behaviors, emotions etc, and then imagine having the same kind of system tapping into a much larger quantity of stored data, performing associations at significantly higher speed.
Then imagine a System 2 able to assess the input from such a System 1 on steroids, able for example to perform multi-dimensional analysis — i.e. the same kind of classic sorting we do when we picture a phenomenon in four quadrants, with two axes defining two different variables (like this one), except that a super-intelligent System 2 would do the same thing with a thousand variables.
Such an intelligence would probably have forbearance with our limited capacity to see the whole picture, but hopefully it would also have sympathy for our capacity to enjoy life with our limitations.
Yesterday I participated as an observer at the Greek-Canadian company Defkalion’s demo of its LENR based energy device Hyperion in Milan, Italy. The device is just like Andrea Rossi’s E-Cat, loaded with small amounts of nickel powder and pressurized with hydrogen, and supposedly produces net thermal energy through a hitherto unknown process that seems to be nuclear (LENR stands for Low Energy Nuclear Reactions).
Defkalion used to be a commercial partner to Rossi until an agreement was cancelled in August 2011 (read more at Ny Teknik here).
The demo was the first public one (apart from a short pre-run on Monday) from Defkalion, which since 2011 has claimed to have developed its own core technology.
My general impression is that the process is similar to what I have seen at Rossi’s demos. If you believe the values presented, it produces thermal power in the order of kilowatts from a very small amount of fuel. Although Defkalion has a somewhat different method for controlling the reaction, it still seems to be a delicate thing to get it to work well without it stopping or running away.
I believe we will get some reliable answers on the validity of Defkalion’s and/or Rossi’s technology during this year.
At Defkalion’s demo I was asked to verify calibrations and measurements just before the start of the demo, even though I had not been prepared for this. Here are a few more detailed considerations:
- the demo was set up in the lab of Defkalion’s Europe office, and thus under complete control by Defkalion. All instruments and sensors were Defkalion’s.
- as far as I could verify there were no hidden wires or energy sources. I cannot completely exclude it, but my general impression was that of a fairly transparent implementation. I was offered the chance to check anything except the inside of the reactor, and even to cut cables (although I never did).
- all values were collected with National Instruments’ LabVIEW.
- input electric power was also measured by me with a Fluke true-RMS clamp meter (Defkalion’s) and a standard voltmeter (my own). Electric energy was input through two variacs — one for seven electric resistors connected in parallel inside the reactor, and one for a high voltage generator feeding sparks through two modified spark plugs. I measured both before and after the variacs.
- output thermal power was calculated through water flow and delta T of the water cooling the reactor ((Tout – Tin) * 4.18 * water flow in grams/second) — see the sketch after this list.
- a control run was performed with argon instead of hydrogen, which showed no excess power. Calibration of the water flow was done and controlled by me during the control run and showed that the real water flow was a few percent greater than what was shown in LabVIEW.
- an issue was detected as LabVIEW showed an input electric power to the high voltage generator of between 200 and 300 watts, whereas I measured an input electric power to the HV generator of between 1.0 and 1.3 kW. We never found out what caused this discrepancy.
- in the active run with hydrogen the output thermal power reached about 5.5 kW, whereas the total input power was about 2.7 kW, taking into account the higher value of the power fed into the HV generator.
- Defkalion had expected to reach a higher output power but admitted that having to degas the reactor only an hour after the argon run was a problem. The process is supposedly very sensitive to small amounts of gases other than hydrogen.
- no account was taken of the enthalpy of vaporization. Yet the temperature at the output reached over 160 degrees Celsius with an open-ended output tube, thus basically at atmospheric pressure. The output was led down into a sink. Initially water was pouring down, but at high temperatures there was no water dropping at all. If all the water was vaporized, the output thermal power would have been above 27 kW.
- the hydrogen canister seemed to be a standard commercial canister containing ordinary hydrogen — no deuterium.
- I could detect no DC voltage or current at any point. The Fluke clamp meter was capable of measuring DC.
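As a rough back-of-the-envelope check of the figures above, here is a minimal sketch of my own in Python (not Defkalion’s calculation; the inlet temperature of about 25°C is my assumption, and the steam properties are standard textbook values):

flow_g_per_s = 0.58 * 1000 / 60    # cooling water flow, roughly 0.58 liters/min, in grams/second
t_in, t_out = 25.0, 160.0          # degrees Celsius; t_in assumed, t_out as reported
c_water, c_steam = 4.18, 2.0       # specific heat of water and (approximately) steam, J/(g*K)
h_vap = 2257.0                     # latent heat of vaporization at 100 C, J/g

# Sensible-heat-only calorimetry, as used in the demo: (Tout - Tin) * 4.18 * flow
p_sensible_kw = flow_g_per_s * c_water * (t_out - t_in) / 1000    # about 5.5 kW
cop = p_sensible_kw / 2.7                                         # about 2, with my higher HV input reading

# If all the water were actually vaporized and superheated to 160 C:
p_vaporized_kw = flow_g_per_s * (c_water * (100 - t_in) + h_vap + c_steam * (t_out - 100)) / 1000

print(round(p_sensible_kw, 1), round(cop, 1), round(p_vaporized_kw, 1))

The sensible-heat-only figure reproduces the roughly 5.5 kW reported above, and the fully vaporized case lands around 26 kW, in the same ballpark as the 27 kW figure mentioned; the exact number depends on the assumed inlet temperature.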
UPDATE: I forgot to say that according to CTO John Hadjichristos there are HUGE magnetic fields inside the reactor as a result of the reaction, in the order of 1 Tesla if I remember right, possibly due to extremely strong currents over very short distances. Hadjichristos says the field is shielded by double Faraday cages, probably the reactor body and the external metal cover outside the heat insulation.
UPDATE 2: Since I have been asked if I can exclude that hydrogen was fed into the reactor during the experiment I have to admit that I didn’t check that the valves were closed. Bear with me.
And some statements/claims from Alex Xanthoulis, president of Defkalion:
- Collaboration is ongoing with six companies for development of particular applications. Several of these companies are among the 10 major companies in the world. The applications concerned are: UAVs, computers, water boilers, electric power generation, greenhouses, ship propulsion (managed by Defkalion), automobiles, water desalination/purification (non-profit organization) and big turbines.
- Agreements for licensing of manufacturing of a consumer product — the Hyperion — have been signed with companies in Italy, France, Greece (Defkalion 50%), Canada and South Africa. 1,300 companies in about 78 countries are interested. The license price has previously been EUR 40.5 million.
- Defkalion has no external investors so far. Principal owner is Alex Xanthoulis.