
Covid-19 is possibly caused by an attack on the body’s oxygenation system

There are some interesting observations on the Covid-19 disease suggesting that it is not ordinary pneumonia or ARDS (acute respiratory distress syndrome), but rather a failure of the body’s oxygenation system, due to an attack by the virus on the hemoglobin that transports oxygen and CO2 through the blood. See the video below with observations made by an ICU physician in New York.
What is being observed is that the lungs of the patient seem to be mechanically ok, and that the problem manifests itself rather as acute high altitude sickness.
Here’s a scientific paper describing the situation from a biochemical point of view, hinting at a failure of the body’s oxygenation system when proteins of the virus attack the hemoglobin and its function of carrying oxygen and CO2.
The conclusion is that ventilators, which are used to relieve the lungs of mechanical work when the muscles are too weak, might not be very effective for treating Covid-19, or might even do harm, unless we find treatments that repair the oxygenation process or stop the virus’s attack on this system.
Another hint is this AI-based study identifying the three top risk factors for developing severe Covid-19 illness (and they are not age, sex, or the things we would expect; hemoglobin turns up again):
“A mildly elevated alanine aminotransferase (ALT) (a liver enzyme), the presence of myalgias (body aches), and an elevated hemoglobin (red blood cells), in this order, are the clinical features, on presentation, that are the most predictive. The predictive models that learned from historical data of patients from these two hospitals achieved 70% to 80% accuracy in predicting severe cases.”
If this hypothesis turns out to be correct, it may be possible to treat the Covid-19 disease without ICU care and save thousands of lives, before a vaccine becomes available about 18-24 months from now.

Rough forecasts for Covid-19 in 12 countries (regular updates)

UPDATE May 11: All 12 countries in the forecast are now at about 5 deaths per million per day or lower, with Sweden highest and furthest behind. By the beginning of June, all countries will probably be at 1 death per million per day or lower. What will happen when lockdowns are lifted is unclear, though. Effects on severe cases should be visible within 2 weeks, and on deaths within 3 weeks.

On May 11, many European countries started to remove restrictions. After 2 months in lockdown, as in Italy for example, people seem to gather more than would be recommended. In the next 2-3 weeks we will know what effects this will have. A second lockdown cannot be excluded if numbers start increasing again.

Obviously it is important that we keep following the advice for limiting the spread of the disease. It’s not over at all, but we can do this together.

The curve for China is an estimated distribution of additional deaths reported on April 17 (see update May 4 below).

The last six days of daily deaths in Sweden are still being estimated in order to compensate for delayed reporting of deaths in Sweden. Accumulated deaths on May 11 were estimated at 3,536, while the officially reported number was 3,256. Daily deaths on May 11 were estimated at about 60.

Here are the latest charts for May 11 (click on the images):



This blog post, which will be updated regularly, contains a rough forecast for the Covid-19 pandemic situation in 12 countries, with three data points:

  1. Daily growth of total deaths in percent (thin line to the left). Dashed graph is the forecast. UPDATE April 26: This graph has been eliminated since it doesn’t add much information.
  2. Number of deaths per day per 10 million inhabitants, average over 5 days (fat curve to the right). Dashed graph is the forecast.
  3. Estimate of total deaths per 1 million inhabitants, plus total number of deaths (numbers to the right).

NOTE: The forecasts are based on math only—I have no knowledge of epidemiology—and are very uncertain (the model is particularly sensitive to sudden changes in the growth percentage before the peak of daily deaths is reached).

The basis for the forecasts is a few real observations, with the model described further down in this post. The essential observation is that the growth of total deaths in percent starts high in all countries (40-50 percent) and then declines day by day (partly for mathematical reasons, partly due to a decline in the growth of daily deaths). From this observation, the forecasts are built on the time series of the daily growth percentage in China, and in some cases also Spain. See below.

The take-away of the observations is that it is essential to push down the daily growth of total deaths (by pushing down the spread of the disease). Note that a small difference in percentage makes a huge difference in the number of deaths over time. If we have 100 dead today and 20 percent growth, we will have 4,600 dead after three weeks. With 30 percent daily growth we will end up with 25,000 deaths! Which measures are most effective should be answered by epidemiologists.
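The compounding arithmetic is easy to verify with a few lines of Python (a minimal sketch; the numbers are the illustrative ones from the paragraph above):

```python
# Compound growth of total deaths: start * (1 + growth) ** days.
def total_after(start_deaths, daily_growth, days):
    return start_deaths * (1 + daily_growth) ** days

# 100 dead today, three weeks of 20 vs 30 percent daily growth:
print(round(total_after(100, 0.20, 21)))  # ~4,600
print(round(total_after(100, 0.30, 21)))  # ~25,000
```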


This is the model for the forecasts:

The estimates in the graphs above are based on a few simple observations on the number of deaths from Covid-19. The number of deaths is used since this appears to be the most reliable data point. The number of “confirmed cases” is considered by many to be largely underestimated, due to many cases being asymptomatic, and the number also correlates mainly with the number of tests made, which can be seen in data from Italy.

The number of deaths is also more relevant to the healthcare system—it seems to be directly proportional to the number of persons in ICU (at least in Italy, by a factor of 1 to 5 or 6) which is the critical number for the hospitals. However, also the number of deaths may be inaccurate since it is unclear how many deaths related to Covid-19 are not registered, e.g. for people dying in their homes.

These are the observations:

  1. In all countries, the total number of deaths increases day by day by a percentage (exponential growth).
  2. In most countries, this percentage starts high (40-50 percent) and after about ten bumpy days it declines day by day, partly for mathematical reasons, partly due to a decline in growth of daily deaths (over time, the growth of total deaths will approach the growth of daily deaths asymptotically). Here is how this looked in China (click on the image):

And here are the percentage curves for 12 countries, starting from the day in each country for one death per one million inhabitants. Note how all countries eventually seem to follow the same slope and path for this curve:

The model uses this pattern, copying the time series of China for the daily growth in percent. 

E.g. if a country one day has a growth of total deaths of 17 percent, the following percentages below 17 percent from the China time series are used for the forecast. Taking one percentage at a time, the total number of deaths for the next day can be calculated (as an estimate), and from this also the number of deaths per day.
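A sketch of that procedure in Python (the reference growth series here is made up for illustration; the real model uses China’s observed values):

```python
# Forecast by copying a reference time series of daily growth
# percentages (illustrative values only, NOT real data for China).
reference_growth = [32.0, 25.0, 19.0, 16.0, 13.0, 10.0, 8.0, 6.0, 4.5, 3.0]

def forecast(total_deaths, observed_growth, days):
    # Start from the first reference value below the observed growth.
    start = next(i for i, g in enumerate(reference_growth)
                 if g < observed_growth)
    totals, daily = [float(total_deaths)], []
    for g in reference_growth[start:start + days]:
        new_total = totals[-1] * (1 + g / 100)
        daily.append(new_total - totals[-1])  # estimated deaths per day
        totals.append(new_total)
    return totals, daily

# A country with 500 total deaths and 17 percent observed growth:
totals, daily = forecast(500, 17, days=5)
```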



  • The number of daily deaths used in the graphs and in the calculations is an average over five days. 
  • For the calculation of deaths per inhabitants in China, the population of Wuhan (11 million) has been used.
  • France had a very long time from the first death to the second death. For easier comparison, the second death in France has been used as a starting point.
  • Countries covered in this post: China, Italy, Spain, Sweden, Norway, Denmark, Finland, UK, France, Netherlands, Germany, and USA.
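The five-day averaging of daily deaths mentioned in the notes above is a plain moving average; a minimal sketch (a trailing window is assumed here, since the post does not specify the exact windowing):

```python
def moving_average(series, window=5):
    # Trailing moving average; windows are shorter at the start.
    result = []
    for i in range(len(series)):
        chunk = series[max(0, i - window + 1):i + 1]
        result.append(sum(chunk) / len(chunk))
    return result

smoothed = moving_average([0, 2, 5, 9, 14, 20, 18, 22])
```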



Earlier updates:

UPDATE May 4: All 12 countries still seem to have passed the peak of daily deaths, although the situation is unclear for countries with the lowest number of total deaths per 1 million, e.g. Finland and Denmark. The long-term number of deaths per 1 million in the USA still seems to remain significantly lower than in Spain, Italy, France and the UK. The graph for daily deaths in China has been updated, following an assumed distribution of the 50 percent increase in deaths reported on April 17. The number of daily deaths in Sweden for the last seven days is still being calculated, with total deaths estimated at 3,051.

It is not clear whether the countries with a low number of total deaths, such as Finland and Denmark, have passed the peak of daily deaths or not. On the other hand, Norway and maybe also Germany seem to have more clearly passed the peak. What will happen when these countries lift their restrictions will be important to follow, since immunity might be lower.

The assumed distribution of daily deaths in China, including the 50 percent increase in total deaths reported on April 17 without information on when these deaths occurred, is close to the graphs for Italy and France, although with a steeper decline of daily deaths. The steeper decline agrees with the low number of daily deaths—mostly zero, occasionally one—reported lately by China.

The number of daily deaths in Sweden is being calculated to compensate for delayed reporting of deaths in Sweden. The total number of deaths on May 4 was estimated at 3,051, almost 300 more than the officially reported number of 2,769. The number of daily deaths during the last week was estimated at between 65 and 73.

The calculation is based on an observed correlation between daily deaths and number of cases in ICU.

Here are the latest charts for May 4 (click on the images):




UPDATE April 26: All 12 countries still seem to have passed the peak of daily deaths. Sweden has had a slight increase but seems to be at the end of the peak. Finland has had a strong peak, but it seems related to delayed reporting. France, the UK, the Netherlands, Germany, and the USA all follow the forecast fairly closely, although the peak of daily deaths for the USA has been extended over the last few days. Generally, all countries seem to see a slower decline of daily deaths than China.

• It has been clear for a few days that all of the other 11 countries, particularly those that have clearly passed the peak of daily deaths, have a slower decline in daily deaths than China (Wuhan) had. I can see this clearly since the forecast model is built on the daily growth factors of total deaths in China.

On the other hand, I have not been able to add the 50 percent increase in total deaths for China reported on April 17, since there is no information about when these deaths occurred.

Adding the newly reported deaths over almost the whole time series, with an emphasis on the period around and after the peak of daily deaths, makes the decline in daily deaths for China more in line with the European countries. But it would also mean that China should still see a few deaths per day, and that is not what China is reporting.

Either China had a higher peak and a much faster decline of daily deaths than the European countries, maybe due to its very severe lockdown, or the recent reports of zero daily deaths in China are incorrect.

• The numbers for Sweden for the last six days are still being calculated, since the reporting of deaths in Sweden continues to be significantly delayed.

• The large peaks for daily deaths in Finland on April 21 and 23 are probably due to delayed reporting and have been redistributed slightly in time.

• I have now eliminated the graphs for daily growth of total deaths since they don’t make much sense in the diagram any longer. I have also extended the x-axes (days from 1st death) in the diagrams.

See the latest charts for April 26 below.

For reference, here is my forecast from March 26, one month ago. It is interesting to note how close the forecasts for Italy and Sweden have been to the real curves, though you can clearly see the slower decline than expected in Italy. In contrast, the forecast for Spain was too high, due to a very fast initial increase in daily deaths in Spain.

Here are the other most recent charts from April 26 (click on the images):




UPDATE April 19: All 12 countries seem to have passed the peak of daily deaths, including the USA. Italy sees daily deaths declining again after a few days at a constant level. The last six days of daily deaths for Sweden are still being calculated in order to compensate for delayed death reports in Sweden. Total deaths for Sweden were calculated at 1,734 for April 19 and 1,787 for April 20, about 200 above the officially reported numbers.

The number of deaths in Sweden is calculated from the ratio of cases in ICU to daily deaths, correlating this value with Italy. The calculation compensates for this ratio being on average 26 percent higher in Sweden than in Italy (see the update for April 12 below).

Note that the US seems to be headed for 176 deaths per 1 million inhabitants, which is significantly lower than in China, Italy, Spain, France, UK, Netherlands, and Sweden. Some news outlets report high numbers of deaths in the USA but don’t seem to take into account the large population.

Also note that passing the peak of daily deaths means that the country passed the peak of new daily cases 2-3 weeks earlier. In order to get to zero, however, we still need to be careful to slow down the spread of the disease—washing hands, keeping distance, staying at home when we are sick, not visiting the elderly, and avoiding social gatherings.

China: On April 17, the Hubei Province issued a “Notice on the Correction of the Number of New Coronary Pneumonia Cases Diagnosed and the Number of Diagnosed Deaths in Wuhan”, in which it reported 1,290 additional deaths that had not previously been counted and reported, bringing the total number of deaths in Wuhan from 2,579 to 3,869, an increase of 50%. These numbers have not been added to the model, on the assumption that the added deaths are fairly evenly distributed over the time series and thus do not essentially change the daily growth percentage in China, which is used as a basis for the forecasts for other countries.

See the latest charts for April 19 below.




UPDATE April 16: The forecast for Sweden is still difficult to make due to delayed death reports. I still use an alternative method for estimating deaths for the last six days (see update April 12 below), but the forecast is a bit lower now, and Sweden still seems to have passed the peak of daily deaths. France and the Netherlands struggle to get past the peak. Italy’s daily deaths don’t decline as fast as in China. The forecast for the USA is significantly increased due to rising daily deaths.

Charts for April 13, 14 and 15 are added below without comments.

The latest charts for April 16:




UPDATE April 15 without comments:


UPDATE April 14 without comments:


UPDATE April 13 without comments:


UPDATE April 12: The forecast for Sweden is significantly increased, putting Sweden at the peak of daily deaths today. A different model is used for Sweden in order to compensate for an obvious delay in reported deaths for the last six days, and it is of course highly uncertain. Several other countries have a lowered forecast, which may be due to delayed reports of deaths during Easter (see charts below).

Explanation of the new estimate for Sweden:

In the last few days there have been several indications that the data for deaths in Sweden are not complete.

  1. The shape of the curve showing daily deaths has been more sharply peaked than in all other countries.
  2. The decline of the growth of total deaths has been quicker than in any other country.
  3. The growth percentage of total deaths has fallen to a value lower than in any of the other twelve countries except China and Italy, which are far ahead in time.
  4. The ratio of cases in ICU to daily deaths is suddenly much higher than in Italy, where the value has been fairly stable along a certain curve, and also much higher than in Sweden only a week ago.

This all indicates that reported deaths in Sweden for the last few days are below the real value. Since reporting of deaths has regularly been delayed during weekends in Sweden, and since Easter falls this weekend, the delay is expected—also by the health agency Folkhälsomyndigheten, which expects complete data on Tuesday, April 14, or later.

For this reason, I have used a different model to calculate an estimate of daily deaths in Sweden for the last six days. The estimate is based on the ratio of cases in ICU to daily deaths. In Italy, where both data series are available, this ratio started at about 12 and reached its minimum of about 4.5 at the point where daily deaths reached their maximum, after which it started increasing again. The peak of cases in ICU was reached about one week later.

Comparing these values with Sweden, the cases in ICU seem to reach their peak about one week from now, on April 19, and the ratio of cases in ICU to daily deaths appears to reach a minimum slightly higher than in Italy, about 5.5. Using these values, it is possible to estimate the number of daily deaths from the cases in ICU. The number of daily deaths appears to reach its peak today, April 12, at 92 (the graph is slightly lower, showing an average over five days). This forecast is obviously highly uncertain, but it fits better with the other countries.
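The estimation step itself is simple once the ratio is fixed (a sketch; the ratio of about 5.5 is the value quoted above, while the ICU figure is made up for illustration):

```python
# Estimate daily deaths from the number of cases in ICU, using the
# ratio of ICU cases to daily deaths (~5.5 near the peak in Sweden).
def estimate_daily_deaths(icu_cases, icu_to_deaths_ratio=5.5):
    return icu_cases / icu_to_deaths_ratio

# Hypothetical example: 506 patients in ICU would imply 92 daily deaths.
print(round(estimate_daily_deaths(506)))  # 92
```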

The lowered forecast for several other countries—in particular France and the UK—may also be due to delayed reports of deaths during Easter.




UPDATE April 11: The forecasts still seem stable. All 12 countries, except the USA, seem to be at the peak of daily deaths or to have passed it. The model indicates that the USA could possibly reach the peak within a few days, even though this might not be what we would expect.

The forecast for Sweden also seems stable but is somewhat uncertain, since the time series from the health agency, Folkhälsomyndigheten, is updated daily with changes even to data points more than a week old, and with the value for the last day often too low due to delayed reports from local health regions.

The number of daily deaths in Sweden is probably around 65-80 at the moment. Tomorrow I will try to adjust the forecast based on the number of people in ICU, comparing with Italy, which has much more data available.




UPDATE April 10: Situation relatively stable in all 12 countries (see charts below). 



UPDATE April 9: Sweden still seems to have passed the peak of daily deaths. In the UK, daily deaths keep increasing, pushing the peak a few days ahead and raising the forecast for total deaths. The forecast for the US also keeps increasing.

This highlights that the forecast from this model is highly uncertain and sensitive to small changes in the growth of the number of total deaths, before the peak of daily deaths is passed.


UPDATE April 8: Sweden seems to have passed the peak of daily deaths. Daily deaths in the USA and Germany keep increasing.

The forecast for Sweden depends heavily on how the data is reported. The Swedish health agency Folkhälsomyndigheten has reported a delay in the registration of deaths. With the numbers of daily deaths redistributed over the past weeks by Folkhälsomyndigheten, the forecast for Sweden has changed significantly to a lower level.




UPDATE April 7: Among the 12 countries covered here, most now seem to have passed the peak of daily deaths, with the rest a few days from the peak.


UPDATE April 2 (see this blog post: Update on Covid-19: Sweden remains below Italy, Spain’s forecast greatly improved): At this point, the forecasts for Sweden, Italy, and Spain were close to what we can see in more recent updates, e.g. as of April 19, i.e. 17 days later:


UPDATE March 28: This forecast showed a significantly lower curve for Sweden. The reason was that death reports in Sweden started to become delayed, and the reported number of deaths at this time was much lower than real numbers.


UPDATE March 26: This was the first update that I published (see this blog post: Covid-19: We have to push down daily growth below 10 percent). Comparing with more recent updates, e.g. April 19—i.e. over three weeks later—you can see that the forecast for Sweden was fairly accurate, for Italy a little too low, and for Spain too high. Still, it gives a hint that the model is quite useful:


Update on Covid-19: Sweden remains below Italy, Spain’s forecast greatly improved


In an earlier post I tried to give a picture of the situation of the Covid-19 pandemic in Italy, Spain and Sweden, on March 26. Here’s an update from April 1st.

Note that I’m not an epidemiologist, but I know mathematics and I’m showing what the data might tell us.

The forecast is based on a simple observation: In ALL countries, the TOTAL number of deaths grows by a certain percentage each day, just like interest on interest on the money in your bank account (exponential growth). And in ALL countries, the daily growth starts high—about 40-50%—and after some initial bumps it decreases day by day (while the total number of deaths obviously continues to increase). From this observation, you can count ahead, using the time series of percentages from China.

For Italy, this has proven to be a good match. The number of deaths per day peaked 34 days after the first death, exactly as I expected. The final total number of deaths is likely to be about 20,000 or about 340 per one million inhabitants, not far away from China (Wuhan) with about 300 deaths per one million.

Looking at Sweden, the forecast indicates that we are still on a lower curve than Italy, even though we haven’t chosen to implement a full lockdown. It is noteworthy that Sweden’s curve is higher than Norway’s, while Denmark, which implemented severe restrictions on March 11, now has the same increase in total deaths as Sweden, and the UK, in lockdown since March 23, has a significantly higher increase in deaths than Sweden, even though it should be a week ahead with a lower increase by now.

The forecast for Sweden, however, is HIGHLY uncertain, as you will soon see from the case of Spain below. Until the number of daily deaths has reached its peak as in Italy, things may still change very much from one day to another.

But if the situation in Sweden continues like this, we might be able to keep the spread of the virus under some control, as I noted in my last post, through an early implementation of a number of recommendations—wash your hands, stay at home if you have symptoms, work from home if you can, avoid meeting people, avoid visiting the elderly. Plus a series of favorable conditions in Sweden when it comes to limiting infections—less spontaneous social life, less daily contact between generations, and a tendency to do what the authorities ask of us for the common good (which might all be seen as less desirable from other points of view).

On top of this, Sweden is now tightening the recommendations, emphasizing the importance of keeping a distance from people, both indoors and outdoors, etc.

For Spain the situation looked far worse than in most countries, but it has improved significantly in the last few days, and the country seems to be reaching the peak of daily deaths right now. Why? Because the percentage I mentioned above has dropped significantly faster in recent days than in Italy and China. Interesting!

You might think that it could be explained by the Spanish lockdown being more efficient than the Italian. But I believe no-one would say that it has been more efficient than in China!

Thus, the steeper slowdown of the increase of deaths in Spain must be due to something else.

One possible explanation is herd immunity (due to lots of people being infected without symptoms, maybe 100x more than the confirmed number of cases), which in that case is slowing down the spread of the virus (and consequently the increase of deaths) more efficiently than the lockdown. A rapid early spread of infection, as happened in Spain, would then lead to the slowing effect from herd immunity arriving earlier and more strongly.

The topic of immunity is important since it will decide whether we will be able to safely lift restrictions without new clusters being formed when the first wave of infection has petered out at the end of May, or if we will have to go on with regular restrictions until we have a vaccine, about two years from now.

We will know this for certain only when we start using antibody tests, which are now being developed, with plans for large scale testing starting in some countries in a few weeks from now.

The tests we are using now only indicate whether there is an ongoing infection or not, while antibody tests will let us know whether an individual has had the infection or not.

But even if there would be immunity, we still don’t know how long immunity lasts, and if a second infection could actually hit those who have been infected before even harder, as is the case with certain viruses such as Dengue.

Summing this up:

  1. There is still a lot we don’t know about the Covid-19 pandemic, apparently not even how effective lockdowns are. But we know that we have to slow down the disease enough to let the health care system handle all sick people. Be safe and respect all recommendations!
  2. From the topic of immunity, we could possibly conclude that very severe lockdowns at an early stage in a country might actually put the inhabitants in a worse situation from a health point of view (apart from the economic costs and the psychological strain on people) when the second wave of the virus arrives. Many expect the second wave to arrive after summer. The second wave of the Spanish flu, for example, was more deadly than the first, and countries severely infected by the first wave appear to have been less devastated by the second.
  3. From China (Wuhan), Italy, and maybe also Spain if the steep slowing continues to improve the forecast, there is some indication that the total number of deaths seems to gravitate around 300-350 per one million inhabitants.
  4. We can also expect that in about two weeks from now—in mid-April—many countries in Europe will reach the peak of deaths per day, and at that point it will be a little easier to make forecasts.


Note: The number of deaths per day in the graph is an average over five days. In Sweden, the numbers of deaths over the last week have been redistributed slightly (following exponential growth), since the Swedish health care regions have reported that the registered number of deaths was delayed for some days and that the delayed numbers were accumulated on April 1st and 2nd.


Covid-19: We have to push down daily growth below 10 percent

How is Sweden doing in the Covid-19 pandemic? Will we manage without a lockdown, in contrast to many other countries?

The short answer: Possibly yes, IF we continue to slow down the spread of infection.

The long answer: Let’s have a look at what the curves tell us.

Firstly: The number of total confirmed cases does not say much. This number correlates mostly with the number of tests done in a country. Furthermore, it is well accepted that the true number of total infected may be much larger. The number of confirmed cases is the tip of an iceberg.

The number of deaths is a better measure. It is easier to count, although even this number may be too low due to many deaths resulting from the virus infection not being attributed to Covid-19.

The number of deaths is also more relevant to the healthcare system—it seems to be directly proportional to the number of persons in ICU (at least in Italy, by a factor of 1 to 5 or 6) which is the critical number for the hospitals.

Just like the spread of the disease, the number of deaths is growing exponentially, i.e. with an increase by a certain percentage every day. Like interest on interest in the financial world.

Note that a small difference in percentage makes a huge difference in the number of deaths over time. If we have 100 dead today and 20 percent growth, we will have 4,600 dead after three weeks. With 30 percent daily growth we will end up with 25,000 deaths!

Thus, it is very important to keep the growth down.

The graphs above are a mathematical simulation that I have made based on the number of deaths so far, for China, Italy, Spain, and Sweden. The dashed part of the curves is a very uncertain forecast based on the numbers from China.

All the curves start on the day of the first death in each country, for easier comparison.

The thin curves on the left represent the daily increase of deaths expressed in percent. They have been declining gradually in China and Italy since the introduction of the lockdown (marked with a round dot). The decline could be due to the lockdown, but possibly also to herd immunity (see remark below).

The fat curves to the right represent the number of deaths per day, per 10 million inhabitants (in the case of China, the population of Wuhan, 11 million, is used).

We can observe that the daily number of deaths stops increasing when the daily growth drops below ten percent. BUT this is only true if the growth of daily deaths continues to decline at the same pace as in China and Italy (otherwise, a constant daily growth of ten percent obviously leads to increasing deaths per day).
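This threshold can be checked numerically. The sketch below (arbitrary starting values) assumes the growth percentage itself declines by 10 percent per day, as described in the notes at the end of this post; daily deaths then peak when the growth is around ten percent:

```python
# Simulate total deaths when the daily growth percentage declines
# multiplicatively by 10 percent per day (40% -> 36% -> 32.4% -> ...).
growth, total = 40.0, 1.0  # arbitrary starting values
growths, dailies = [], []
for day in range(60):
    new_total = total * (1 + growth / 100)
    growths.append(growth)
    dailies.append(new_total - total)  # deaths on this day
    total, growth = new_total, growth * 0.9

peak = max(range(60), key=lambda d: dailies[d])
print(round(growths[peak], 1))  # growth percent on the peak day, ~10
```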

We can also observe the steep increase in daily deaths in Spain, which depends on a slightly higher daily growth. Spain has been slower than China and Italy to push down the daily growth, and it is not yet below ten percent, meaning that the number of daily deaths is still growing fast (the top of the forecast curve is outside the diagram).

Daily growth of deaths is reasonably correlated with the spread of the disease, or in other words with the true number of infected cases.

Now, the question is, will the mild measures in Sweden be enough to push down the spread of the disease, also pushing down the daily growth of deaths below 10 percent?

A number of favourable aspects have already helped us to keep the growth down initially (as we can see on the thin blue growth curve for Sweden):

  1. Culturally, Swedish people have a tendency to do what they are asked to do for the common good, and we started relatively early with these behavioural recommendations, warned by Italy: Wash your hands, stay at home when you have symptoms, work from home if it is possible (and in Sweden it often is possible thanks to stable internet connections), avoid social contexts (some would claim that this comes naturally to us in Sweden…), protect the elderly by not meeting them etc.
  2. We are reasonably helped by younger demographics compared to Southern Europe, and it probably also helps us that there is less daily contact between generations, traditionally.

BUT will this be enough to keep pushing down the daily growth below ten percent? Or will we need a lockdown too? (We are about 19 days after Italy, so a lockdown at the same point in time would be on March 28, 19 days after Italy’s lockdown on March 9. That is tomorrow).

Only the epidemiologists know this.

What I have shown here is the maths describing the connection between daily growth in percent, and the culmination of daily deaths.

The simple conclusion is that we MUST slow down daily growth further.

We can do it together!


  1. For the forecast of the thin curves (the dashed part of the curves) I have used the values from China of daily growth of deaths in percent, starting with a value that is closest to the last observed (real) value of daily growth in each country. The forecast part of the fat curves is a calculated result of the development of the thin curves.
  2. The forecasts, especially for Sweden which is very early on the curve, are HIGHLY UNCERTAIN.
  3. An important uncertainty in Italy is what will happen when the disease starts spreading more in the southern parts of the country.
  4. In China and in Italy, the decline of the growth after the lockdown has been approximately 10 percent per day. NOTE, it is not percentage points per day, but percent per day. E.g. if the growth one day is 20 percent, the next day it will be 18 percent, and the next 16.2 percent and so on.
  5. The decline in growth in China and Italy after lockdown seems to be a result of the lockdown. BUT it could also be explained by herd immunity: a slowdown in the spread of the disease because a large part of the population has been infected (largely without symptoms) and is now immune. Since no one knows the true number of infected, we cannot know this yet. Only when antibody tests have been developed and used broadly might we get an answer. Or, if we observe that the infection does not start spreading again in the Hubei province (assuming we can trust data provided by China), we could conclude that there is fairly high immunity. What points towards herd immunity as an explanation is the fact that the decline in growth starts immediately when the lockdown is implemented. If the lockdown were the main factor slowing the growth, we would expect a delay of at least a week before a decline in the daily growth of deaths showed up.
  6. Data sources:
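The forecast mechanics described in notes 1 and 4 can be sketched numerically. This is a minimal illustration under the stated assumptions (the growth rate of cumulative deaths declining by about 10 percent per day, as observed after lockdowns in China and Italy); the function names and starting values are hypothetical, not the author's actual model:

```python
def declining_growth_rates(initial_growth, daily_decline=0.10, days=10):
    """Growth rate shrinks geometrically: 20% -> 18% -> 16.2% -> ...
    Note: the decline is percent per day, not percentage points per day."""
    rates = []
    g = initial_growth
    for _ in range(days):
        rates.append(g)
        g *= (1 - daily_decline)
    return rates

def project_cumulative_deaths(current_deaths, growth_rates):
    """Compound cumulative deaths forward using the declining rates."""
    totals = []
    d = float(current_deaths)
    for g in growth_rates:
        d *= (1 + g)
        totals.append(round(d))
    return totals

# Hypothetical example: starting from 20% daily growth, as in note 4
rates = declining_growth_rates(0.20, daily_decline=0.10, days=3)
projection = project_cumulative_deaths(1000, rates)
```

With a 10 percent daily decline, a 20 percent growth rate falls below 5 percent after roughly two weeks, which is what produces the culmination of daily deaths in the fat curves.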

Here’s why meetings and events are becoming increasingly important

Many seem to agree that meetings and events are becoming increasingly important today. But exactly why are they becoming more important? Our gut feeling goes a long way towards answering the question, but if we want to make the right decisions in a world that is changing at an accelerating pace, it may be good to understand the causes.

And the best way to get this understanding is to look at the big picture. That is my experience after many years of analyzing how technology changes our lives and our society.

Let’s start with digitalization, which is arguably the strongest driving force of change in our time. Everyone talks about it, yet it remains vague to many people.

The easiest way to understand digitalization is to look at other major technology shifts such as the wheel, the printing press, the steam engine and electricity. They all created previously unknown opportunities, which led to new ways of working, new business models and a changed everyday life. We simply adapted.

What makes digitalization special is that this technology shift alone creates so many new opportunities compared to previous major technology shifts. With digital technology, we’re not only able to manage and communicate information very efficiently, we can also copy everything that is digital at almost no cost. In addition, we can spread it to the world with one click, and to top it all, we do it from pocket to pocket!

Driving megatrends

This leads to a number of megatrends that everyone notices, although many might not reflect on how these trends form.

One example is the increased focus on customer experience, arising from the fact that the whole world is competing for your attention. As a result, we have raised the bar for what we regard as a high-quality service or product. Consequently, more effort than ever is needed in order to keep the customer’s attention and to offer something that is attractive.

Another example is the trend of purpose—that profitability is no longer enough to run a successful company. This trend is influenced by the fact that it has become increasingly possible for individuals to make a difference globally—with a good idea, a little code and a few servers on the internet. A couple of decades ago this was almost unthinkable. And if we can make a difference, then of course we want to.

There are many other examples—I usually talk about the 13 different faces of digitalization which together are driving a powerful change in our society and in our lives.

Everyone talks about AI

Added to this is artificial intelligence, AI, which everyone talks about too, but which might seem even more vague than digitalization, and perhaps also a bit threatening.

The truth is that AI has made impressive progress in recent years, but that the technology is still nowhere near a human-like consciousness in a machine. Today’s AI systems are above all impressively good at learning and recognizing patterns in all forms, including complex patterns such as behavioral patterns.

However, in contrast to humans, AI has no understanding of cause and effect. For example, an AI system may be able to accurately assess the risk of rain based on the appearance of the clouds, but it has no idea that the rain is coming from the clouds.

In practice, AI is therefore good at predicting things, and at performing all kinds of routine tasks—not just manual work but also mental work, within areas ranging from administration to research—but only in narrow niches.

Can AI take my job?

So, will AI steal your job? Not really. Rather, AI allows us to automate a variety of tasks that many people perceive as boring and time consuming. We’re getting a digital colleague that provides effective help in our work, meaning that we humans can focus on what AI is still very bad at doing. Mainly, this is about four areas:

  • creativity of all kinds
  • the ability to motivate and convince others
  • empathy and compassion
  • the ability to see context and opportunities for collaboration.

Would you say that any of these aspects might possibly relate to meetings between actual people?

If we add the powerful development driven by the 13 faces of digitalization, it is clear that we find ourselves in a changing world where people are needed more than ever—to be creative about how we can adapt and seize new opportunities, and to focus on everything typically human that AI can’t do for us.

Develop human aspects

Meetings with people are then essential. That is where we learn from each other and exchange ideas. It’s where we develop our human aspects in relation to each other. And it’s where we shape the role of humans in a digitalized world.

Certainly, digitalization can make meetings better—helping us to share information more efficiently, letting people collaborate better with digital tools, managing administration in a more automated way, extending communication before and after the meeting, etc. In short, there are plenty of digital opportunities.

But what’s fundamental—in all industries—is not to be seduced by the possibilities of technology, throwing the baby out with the bathwater. Or in other words, not to forget all the human experience and skills that sometimes express themselves as a gut feeling.

When it comes to meetings and events, such skills are absolutely essential. As you can see, the gut feeling is right—there are a number of reasons why meetings and events are more important than ever.

And in case anyone is wondering, you can say you heard it from a digitalization expert—who would love to tell you more.

Ten reasons that the future is happening right in front of you

Obviously we don’t know anything about the future. And worrying about the future is not what you do all day. Yet, you want to be prepared.

So ask a futurist. But not even a futurist will know. So what could the futurist tell you? Well, to look around at what’s happening. Because the future is happening now, right in front of you. And the work of a futurist, like me, is trying to really see what’s happening in front of us—although we often don’t notice it—and to understand what implications these things might have.

Because every so often we look back and say: “Why didn’t we notice? It was right in front of our eyes!”

So here we go—ten reasons that the future is happening right in front of you. (This is the condensed version. If you want more I’ll be glad to give a talk as a speaker to give you a better understanding of where we’re heading and how your organisation can prepare).

  1. Everything is going faster and faster. You have probably noticed. But make no mistake, it’s a long-term trend which has been going on for billions of years. And there’s nothing stopping this acceleration, simply because people are increasingly connected, and because we just cannot stop inventing. So remember—anyone who thinks that we might pull the brake and slow down the pace of change will be disappointed. However, we can still shape the future. So let’s talk more about it.
  2. Digitalisation is changing all the conditions for what we do. Sure, digitalisation is a driver of change, but so were other technologies. True. But the thing is—digitalisation changes the conditions for what we do in so many ways we can hardly grasp them all. It’s not like the wheel, the printing press, the steam engine, or electricity. It’s so much more, letting you do basically anything more efficiently and smarter, mixing these things, copying them, and spreading them to the world, pocket to pocket, at almost zero marginal cost. Try to grasp the implications of that!
  3. We’re building horizontal networks at global scale. If you think about it, this has never happened before. Everything we have been doing at large scale through history has been hierarchical. And then suddenly we can build flat networks and communicate across the globe, peer-to-peer. Just look at what happened with news distribution, which used to be hierarchical with editors etc. Then came news through social networks, and you know what happened. Very few had anticipated this. Most people thought that everyone would get access to the truth through the Internet. What happened was that everyone got the means to spread their “truth” across the globe. And this is just the start.
  4. Everything can be measured. Ok, so what’s new? We’ve always been measuring. Well, the new thing is that essentially anything can be measured like it was never measured before, in real time. Even the endings of episodes of TV series. And how machines are behaving. And how you’re feeling when you’re driving. And what illness you might have based on your breath. And we can analyse all these measurements and adapt our offerings, our actions, our strategies, etc. according to what we learn. Which means that you have to do this, unless you want to become irrelevant.
  5. The customer is more spoiled than ever. In what way? Well, simply because it’s possible to reach the world with whatever you want to offer, and because customers all over the world therefore have access to tons of stuff that they could never access before. So we have raised the bar for what we think is good—if it’s not good enough we just go to another site. Meaning you have to do everything you can to keep the customer’s attention. It’s a war, and customers just won’t settle for less than excellence. This is why the customer experience is more important than ever.
  6. Sharing requires new business models. Why? Because sharing on digital platforms is so much more efficient than business as usual. This is what drives the shift from owning to accessing—of music, digital infrastructure, transportation, and more. So you’d better consider what sharing can do for your business.
  7. We can make a difference. Just ten or twenty years ago, very few people could make a difference at a global scale. Basically only global corporations and states had that power. Today, a few people with a good idea, using the Internet and cloud computing, can make things that really change the world, even without aiming for profit. Wikipedia and open source software are just two obvious examples. The consequence? People want to make a difference. People want purpose. Just ask millennials. Profit is not good enough.
  8. Everything can be mixed. Sugar and egg? No, I’m talking about digital stuff. Anything that has been digitised—quite a few things today—can be mixed with a few lines of code. Twenty years ago that was almost unthinkable. Today it’s a piece of cake, and probably the biggest unexplored source of new services and products. Just throw in some new data, a new game application, a new industry connection, or a new communications channel in the business you’re running, and you’re off in a new direction with new opportunities. Every day!
  9. AI can do stuff so you can focus on other things. You’re hearing about AI every day. No, it’s not human-like, yet. Far from it. But AI has made huge progress in the last few years when it comes to learning things that only humans could do before, and sometimes it performs even better than humans. Playing advanced games. Optimising energy consumption. Interpreting chest X-rays. Distinguishing a fake smile from a real one. Translating. Answering questions. Predicting customer behaviour. Knowing what you like. Etc. But only in narrow niches. The fundamental take-away for businesses is that you need to let machines take over everything they can do, so that humans can focus on the only thing that matters—customer experience. Simply because every new player in your field will do so, and if you don’t, you’re done.
  10. Humans will become more human. It’s easy. Since machines can focus on what machines are good at—boring, repetitive, and dangerous tasks—humans can focus on what machines are not so good at (yet)—creativity, capability to convince and motivate, empathy, and context and collaboration. In other words, human stuff. So if we let machines do what we’d rather not continue doing, we can focus on becoming more human. Or: work is for machines, life is for people.

Then there’s the eleventh point: Machines will become… Well, I have some ideas, but I could talk more about that in a speech. Just a hint—try to imagine machines becoming better at communicating with each other and at sharing their knowledge and their experience.

So, did you think that the future will be more or less like today, just with a few more gadgets? Now, maybe not so much. These ten points are not small changes. Then add the first point again—it’s going faster and faster.

OBVIOUSLY there’s an immediate conclusion: without increased efforts for sustainability, everything will go out of balance. Sustainability is therefore the most important aspect if you want to prepare for the future. Fortunately there are ways to solve global warming, and a new compact, cheap and carbon-free energy source is part of the solution. But that’s another story.

If you want to hear more, don’t hesitate to contact me for a speaker engagement!

The Future of the Nation-State

How the nation-state can find a way through digitalization. 


Note: This essay is published as chapter 17 in the book Digital Transformation and Public Services: Societal Impacts in Sweden and Beyond. The book is the final report from The Internet and its Direct and Indirect Effects on Innovation and the Swedish Economy—a three-year research project funded by The Internet Foundation in Sweden (IIS) and led by Professor Robin Teigland, whom I had the great pleasure to collaborate with in this project, as well as with editor Anthony Larsson.

An e-book edition of the book can be downloaded for free from Amazon, and a pdf version can be downloaded from Taylor & Francis.


1   Introduction

It is commonly recognized that the Internet and digital technologies are bringing about a fundamental and sometimes disruptive change to businesses, society, and the lives of individuals (Kenney, Rouvinen and Zysman, 2015). The overall phenomenon is often referred to as digitalization, whereas the process of effectively adapting to digitalization is called digital transformation.

This chapter aims at investigating the potential future of the nation-state in the context of change brought about by the Internet and digitalization. Will the nation-state go through a process of digital transformation, altering its characteristics to be more in line with the conditions of a digitalized world and with the changed expectations of its citizens, maintaining their support? Or will it be completely disrupted and potentially replaced by some other organizational structure, better adapted to meet the future demands of people and organizations living together on our planet, in a world shaped by digital technologies?


2   Methods

The underlying theory that will be used in this chapter is a framework called the innovation loop, developed by the author, loosely inspired by evolutionary theories, mainly the concept of natural selection and the survival of the best adapted1 by Charles Darwin, and the theory of how innovations diffuse, presented by Everett Rogers in his seminal work Diffusion of Innovations (Rogers, 1962).

Figure 1 The innovation loop
Source: Model by the author.

The basic principle of the innovation loop (see Figure 1) is that evolutionary steps in the biological system can be compared to human inventions and that, in a generalized sense, the evolution not only of species but also of technology, businesses, and society can be seen as a result of natural selection and survival of the best adapted – be it organic beings, individuals, products, services, processes, organizations, or societal structures.

A natural phase of the loop to start looking at is the point of an invention. Although human inventions are mostly made through human thinking, while biological evolutionary steps occur through random variations such as mutations, both are based on previous steps, and both, if favorable, diffuse gradually.

Biological evolutionary steps diffuse through inheritance and natural selection and human inventions through individuals’ varying tendency to adopt them, as described by Rogers with concepts such as early adopters, early majority, late majority, and laggards.

When a favorable evolutionary step or an invention reaches a certain level of diffusion, it starts changing the conditions for everything in the surrounding system. This is most notable for larger evolutionary steps and larger inventions, with examples ranging from sexual reproduction and sight to the wheel, the printing press, the steam engine, and the Internet.

As a consequence, organic beings, individuals, and organizations will have to adapt to the new conditions, and those who are best adapted will be the most successful. This is arguably also valid for products, services, and structures, which are being adapted by humans according to changed conditions. The adaptation process is similar in the biological system and in the innovation system, especially regarding behavioral adaptation, and it is effectively described through the concept of natural selection.

One recent example could be a person adapting to the availability of an Internet connection, starting to write emails instead of letters and searching for information on the web instead of in libraries and books.

It is worth noting the importance of diversity in the process of adaptation – the less diversity, the less variety in ways to adapt. Moreover, the less variety in ways to adapt, the higher the risk of weak points being shared by multiple entities in the system, which would in turn make the system as a whole more vulnerable. To a certain extent this is also true for the diffusion phase, and altogether this is arguably the fundamental reason for striving for increased diversity of all sorts in all systems and in all situations, since change and innovation occur continuously everywhere.

Once the initial adaptation process has been established, an extended form of adaptation takes place. This phase consists of various types of experimenting, for example, combining the initial invention or evolutionary step with others already existing in the system, or exploring opportunities to build inventions on top of the initial one. Humans do this through innovation, while nature uses random variations such as mutations. Since this phase leads to new inventions, it completes the cycle: innovation, changed conditions, and success of the best adapted.

An example of the last phase would be the combination of the Internet and commerce, paving the way for e-commerce.

A few more observations can be made regarding the innovation loop:

  • It’s not a serial process. Many loops are running in parallel, interacting with each other.
  • The result of each cycle can be described as increased self-organization and increased efficiency. This is best observed over a large time span, billions of years back. Early life started with single-celled prokaryotes that evolved into multicellular eukaryotes, then into plants and animals – animals through reptiles and mammals, into apes, eventually walking on two legs, gaining intelligence, and evolving into homo sapiens, inventing the spoken language, agriculture, the wheel, the written language, the printing press enabling the scientific revolution, the steam engine enabling the industrial revolution, electricity, the telephone, radio, the Internet, the World Wide Web, the smartphone, and further inventions now being developed. Unless there is a divine power influencing this evolution, what we are looking at is one single self-organizing system. And although the smartphone should hardly be considered the “crown of creation,” efficiency is steadily increasing in the sense that what can be achieved with a given amount of energy, resources, and number of organic beings has never been greater than it is today.
  • The pace of change is constantly increasing, which can be concluded by noting the time span between early evolutionary steps, for example, about two billion years from prokaryotes to eukaryotes, and comparing it to the time span between recent major inventions, for example, about 17 years from the World Wide Web to the smartphone. Ray Kurzweil has proposed an explanation for this increase in the pace of change in his essay “The Law of Accelerating Returns,” where he argues that “Evolution applies positive feedback in that the more capable methods resulting from one stage of evolutionary progress are used to create the next stage. As a result, the rate of progress of an evolutionary process increases exponentially over time. Biological evolution is one such evolutionary process. Technological evolution is another such evolutionary process” (Kurzweil, 2001, paras. 11, 15, 16).
  • The evolution described by the innovation loop will arguably continue, resulting in increased self-organization and increased efficiency at an ever-increasing pace of change, until it potentially fails because of lost stability, for example, due to a failure to balance available resources with consumption and recycling of resources.

The framework of the innovation loop will be used in this chapter to analyze the roots, the construction, and the development of the nation-state from the perspective of change brought about by inventions. Specifically, it analyzes the future evolution and the fate of the nation-state under the influence of the Internet and digital technologies – arguably the most important field of inventions of our time. The invention-related analysis will be integrated with ideas and findings on the topic of the future of the nation-state presented in a number of articles and academic papers, and with opinions and ideas expressed in personal interviews conducted with a series of people from April 2018 to May 2018.


3   Results and discussion


3.1     Definitions

Essential to this analysis is a definition of the nation-state. There is some common confusion about the use of the terms nation, state, nation-state, and nationalism. The definitions that will be used here are that nation is a cultural term referring to a body of people bound together by certain aspects that give them a sense of unity, making them feel that they have something in common and differ from other people; state is a political term, referring to a body of people who live on a definite territory and are unified under a set of institutional forms of governance, which possess monopoly of coercive power and demand obedience from people; and the nation-state is a merger of the two terms, implying that politics and culture support each other (Lu and Liu, 2018).

It must be noted that although nation is a cultural term, what gives people a sense of unity does not have to be a common history and a culture derived from a common descent or ethnicity. The source of unity might also be democracy and a common political will where the members have equal rights. These two approaches are commonly called the ethnic approach and the civic approach (Hutchinson, 2003; Lu and Liu, 2018).

This is also why the nation-state is not immediately related to the concept of nationalism, which could be seen as a strong emphasis on the ethnic approach of the nation, often better explained by the term ethno-nationalism.

Also, the state can be related to the two approaches. The ethnic approach argues that people’s trust in the state derives from deeply rooted cultural values that are learned from an early age, leading to an interpersonal trust, which in its extended form builds trust in the state. The civic approach, on the other hand, maintains that the trust in the state is built on the performance of political institutions (Lu and Liu, 2018).


3.2      Origins and development of the nation-state

Looking at the nation-state from the perspective of innovations, its early roots can be considered to go far back in human history. Spoken language – one of humans’ first and most important inventions, which is believed to have evolved between 50,000 and 200,000 years ago – changed the conditions for human interaction by introducing the oral tradition. This is arguably an important base for human culture and thus for building culture-based communities, which in turn are a building block for the nation according to the ethnic approach.

The invention of agriculture from about 10,000 years ago brought people closer together in villages, and a reasonable adaptation might have been improved methods for defining property and for resolving disputes. Another communications technology, written language, which was invented about 5,000 years ago, probably helped in that matter. This could be seen as the early roots of the state.

Moving forward in history, there are different opinions among scholars as to whether the nation was formed as a consequence of the state or the contrary.

Early forms of modern states from the 13th century onward are by some considered to have been formed through war, where warfare contributed to an increasingly centralized administration in order to impose taxes and enforce order (Hutchinson, 2003). Centralized and more complex administration also developed due to improvements in agricultural productivity, which made it possible to sustain larger populations. Inventions and discoveries in other areas such as political economy, mercantilism, and cartography further strengthened the state, all leading to people in nations being more united, thus indicating that the nation emerged as a result of the state.

On the other hand, inventions such as the printing press gave rise to increased literacy, the strengthening of national languages, and the sharing of common knowledge, tales, and culture, while also increasing the reliability of trade, for example through the spread of techniques like double-entry bookkeeping – all entailing an increased interconnection between people, potentially building a sense of unity, which in turn could support the state. This consequently suggests that the state was built on the nation (Anderson, 1991).

Regardless of whether the nation led to the state or the other way around, or a combination of both – which might seem reasonable – another observation remains: inventions in fields such as warfare, agriculture, printing, trade, and communications all changed the conditions for people and society. Consequently, people and society had to adapt to these changed conditions in ways that gradually strengthened both the nation and the state.

The merger of the two, that is, the nation and the state, and the subsequent establishment of the nation-state in its modern form, is commonly considered to be a result of the Treaty of Westphalia in 1648 (Schwartzwald, 2017). Until then, many different powers, religious as well as civic, had an overlapping authority on nations, territories, and populations. The essential outcome of the Peace of Westphalia was that states were guaranteed sovereignty over a nation, its territory, and its population, thus forming a nation-state, and that other states would refrain from interfering with internal affairs in neighboring territories, for example, by supporting foreign co-religionists in conflict with their states.

The resulting international system of independent and sovereign nation-states has been compared to billiard-ball interactions – an anarchical society of external interactions between states (Thompson and Hirst, 1995).

From an internal perspective, it did not matter in this system whether states were empires or based on homogenous nations or if they were autocratic or democratic. But again, inventions influenced the evolutions of states. It has been argued, for instance, that improved communications technology – roads, rail, and the steam engine – made it technically possible to bypass local leaders and impose direct rule as opposed to indirect rule, thus ending the age of empires (Hechter, 2001; Hutchinson, 2003). Reasonably, other communications technologies – printing, telegraph, telephone, and radio – over time reinforced this trend. Such technologies also supported the spreading of information and the evolution of democratic systems. And, in fact, democracy is often considered to have given legitimacy to the sovereign power of the state, replacing a sovereign autocrat or king, eventually including citizens and binding them together and thus strengthening the nation-state (Thompson and Hirst, 1995).

Representative governments could also create uniform national systems for administration, education, and public health, again supporting increased inclusiveness and homogeneity of the population. During the 20th century, states also acquired the means to manage national economies – through state planning in the Communist world and through Keynesian measures in the Western world.

This is seen by many as the final glorious days of the nation-state before its demise under the pressure of globalization and digitalization. One early blow to the logic of a world composed of independent and autonomous nation-states was the end of the Cold War in 1989, when the Berlin Wall came down and US President George H. W. Bush and Soviet General Secretary Mikhail Gorbachev met at the Malta Summit, making declarations of cooperation and peace. The Cold War in itself, emerging as a result of nuclear weapons technology, had already destabilized the idea of independent nation-states, since the devastating power of nuclear weapons had made it essentially impossible for nuclear powers to win a war. A nuclear conflict meant unthinkable destruction on both sides, and thus the idea of settling disputes through war in the anarchical system of independent nation-states became untenable.

Yet, a nuclear attack remained a palpable threat and a possible scenario for many during the Cold War, making states necessary. With the end of the Cold War, however, this argument became more diffuse, while the effects of an increasingly global economy gained more attention.

A common view since then, summed up by the term globalization, is that in a global economy, market forces are stronger than any nation-state. Capital is mobile, while labor is not, meaning that capital moves where the conditions are most favorable, forcing nation-states to compete by providing such conditions. The nation-state has thus had to give up some of its autonomy and sovereignty – the exclusive power to independently manage instruments such as national labor rights and monetary and fiscal policies. From this perspective, nation-states are becoming to the world what local authorities used to be to the state – providers of desired conditions for attracting businesses, without the power to shape the economy or employment within their territories.

This view, at least in its most far-reaching form, has been contested, for example, in the article “Globalization and the Future of the Nation-State” (Thompson and Hirst, 1995). The authors admit that there is some truth in the globalization view – states are less autonomous, they have less exclusive control over the economic and social processes within their territories, and they are less able to maintain national distinctiveness and cultural homogeneity.

Thompson and Hirst (1995) argue, however, that the changes are not as profound as they may seem, for a series of reasons, among them that:

  • The number of genuine transnational companies (TNCs) is small.
  • Foreign trade flows and patterns of foreign direct investment are highly concentrated in advanced industrial states and a small number of newly industrialized countries.
  • The thesis that capital is moving inexorably from high-wage advanced countries to low-wage developing countries is inaccurate in aggregate.
  • The evidence that world financial markets are beyond regulation is by no means certain.

An example of the last point is a series of international investigations, blacklists, and measures against tax havens in recent years, in order to reduce tax avoidance, estimated at £506 billion each year (Boffey, 2017).

Thompson and Hirst (1995) also argue that the reduced autonomy of the nation-state doesn’t deprive it of an important role in a global system of governance. Instead, because of its relationship to territory and population, the role of a nation-state is pivotal as a source of legitimacy for transferring power both “above” it and “below” it – above through agreements with other states in various international organizations and bodies, and below it through a constitutional balance within its own territory among central, regional, and local governments, and also with publicly recognized private governments in civil society.

According to Thompson and Hirst (1995), nation-states can do this in a way no other agency can, because they provide legitimacy as the exclusive voice of a territorially bounded population. They admit that such representation is very indirect, but that it is the closest to democracy and accountability that international governance is likely to get.


3.3      The impact of digitalization

In the last decades, global mobility of technology has been added to that of capital. With the broad diffusion of the Internet, technology and information are vastly more mobile than they ever were before.

From the perspective of the innovation loop, a few of the major changed conditions brought about by the Internet and by the first wave of digital technologies are that:

  • Computer software can be used to effectively manage, process, and analyze most kinds of information and contents, making an increased level of automation possible.
  • The cost of a digital copy is essentially zero – thus, once information, processes, contents, services, products, or even weapons have been digitalized, they can be copied at large scale at a minimal cost.
  • The whole world can be reached with one click, not just with a phone call as through the telephone network, but with basically anything that can be digitalized.
  • Once products and services have been digitalized, they can be mixed with other products and services much more easily than ever before.
  • The Internet connects people peer-to-peer across the world, which means that horizontal structures for human activity, as opposed to hierarchical or vertical structures, which have been the norm for thousands of years, are becoming functional at a large scale.

Among the consequences or adaptations to these new conditions are:

  • A major change in business models with a general shift from owning to accessing and from products to services, since what is digital can be copied almost infinitely and what is not digital can be shared efficiently on digital platforms (often referred to as the sharing economy), with a subsequent concentration of users to a few large digital platforms through the network effect2
  • An increased reliance on horizontal structures such as social networks and informal networks for knowledge and news distribution, with partially unexpected effects such as trolling and fake news
  • An increased focus on purpose in human activities – for example, in businesses where purpose may become as important as the basic requirement of long-term profitability – reasonably as a consequence of the opportunities the Internet offers for horizontal collaborative work with significant global impact at a scale that was only attainable by states and global corporations a few decades ago, for example, Wikipedia and open-source software
  • An increased focus on customer experience since it is easier than ever to launch a product or a service offering with limited resources, targeting the whole world, and since this entails intensified competition for people’s attention and steadily higher expectations among users and customers for ease, convenience, and transparency
  • The emergence of a fourth military branch beyond Army, Air Force, and Navy – a cyber-warfare branch, along with a generally increased focus on cybersecurity in businesses and in society, as a consequence of the growing possibilities to perform highly effective cyberattacks, ranging from criminal activities aiming at economic gain, to state-supported activities with the purpose to inflict damage on enemy countries
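The winner-take-all concentration of users onto a few platforms, mentioned above as a consequence of the network effect, can be made concrete with a toy calculation. The sketch below uses Metcalfe's law (network value grows roughly with the number of possible pairwise connections); the user counts are invented for illustration and are not from the text.

```python
# Toy illustration of the network effect: under Metcalfe's law, the value
# of a network grows roughly with the square of its user count, so a
# modest lead in users becomes a large lead in value.
# (Numbers are assumptions chosen for illustration.)

def metcalfe_value(users):
    # Value proportional to the number of possible pairwise connections.
    return users * (users - 1) // 2

big, small = 1_000_000, 250_000              # 4x more users...
ratio = metcalfe_value(big) / metcalfe_value(small)
print(round(ratio, 1))                       # prints 16.0 – roughly 16x the value
```

A fourfold lead in users thus translates into a sixteenfold lead in value, which helps explain why users, and eventually whole markets, tend to concentrate on one or two dominant platforms.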

The collective term for these phenomena is digitalization, whereas the ways in which businesses or public entities or agencies adapt in order to remain competitive and relevant in a digitalized world is often called digital transformation.

Although digitalization is regularly perceived as an established and stable ongoing process, it can be argued that it is only at its beginning, given that most organizations, private or public, have only just started their digital transformation.

Furthermore, before the first wave of digitalization has even reached its full power, a second wave is emerging with technologies such as AI and machine learning; natural language processing and cognitive computing, with effective voice interaction and human-like dialogue; the Internet of Things with networks of sensors and actuators – being to AI what our senses and limbs are to the brain; advanced automation also mastering mental work; and blockchain-based applications distributing a range of services and structures, potentially eliminating the need for controlling bodies and institutions such as banks, governmental agencies, and even courts.

Combining these technologies, the second wave will bring changed conditions that are difficult to anticipate, to say nothing of people’s ways to adapt to such changes. As mentioned before, the pace of change is also increasing, which gives a hint about further inventions adding up to a third wave, and a fourth, and so forth.

The main question that this chapter aims to address is whether it will be possible for nation-states to remain relevant through digital transformation and, if so, how – or whether they might be disrupted, become obsolete, and be replaced by some other structure for governing the world’s populations and territories.

To understand how the nation-state can adapt to digitalization, it may first be useful to establish what the nation-state offers that is adaptable. From the initial definition, we find that the nation-state provides unity, a territory, and a form of governance. Through its institutions, it also supplies a number of services.

Out of these four aspects, both services and governance can be effectively adapted from a digital transformation perspective. Unity, in contrast, could be considered to depend on second-order effects of digital transformation, while territory is a physical aspect that turns out to remain important in a digitalized world. This will be discussed later.

Adaptation of services in a digitalized world involves not only services offered by the nation-state itself; services offered by alternative providers must also be taken into account, since digital technologies and the Internet, as noted before, make it possible for service providers to target the entire world, even with fairly limited resources.

The ways in which governance may be adapted to digitalization, on the other hand, depends largely on the form of governance. Noting that democracy is considered to have strengthened the nation-state through increased legitimacy, the question of governance will here be focused on adaptation of the democratic process, although effects of digitalization on authoritarian states will also be discussed further down.

Hence, from a digitalization perspective, there are three important aspects of digital transformation of the nation-state – efficiency of services offered by the state to citizens, alternative providers of those services, and the structure of the democratic process.

1. Among the main services offered to citizens are health care, child/elder/social care, education, infrastructure, law and order, and defense, all paid for by tax revenue.

Like any other service, all these can be made more efficient through digitalization. This is necessary since the expectations of individuals and organizations are increasing, while tax revenues will not grow substantially. The increased expectations derive partly from people’s experience of the large range of various services offered online, with a significant increase in convenience and ease of use compared to only a decade ago, all at a low cost or for free (in exchange for access to people’s personal data).

In this way, people have an indirect understanding of the quality improvements and efficiency gains that are possible through digitalization, using, analyzing, and combining data flows; providing well-designed user interfaces; and so on, and they naturally expect the same improvements in public services. In short – the conditions have changed and public services have to adapt in order to remain relevant, which also goes hand in hand with the increased focus on customer experience brought about by digitalization.

In many nation-states, the digital transformation of public services is ongoing, but since these services are normally not exposed to market competition, there is not any immediate risk of being outcompeted, and the driving force for change must, therefore, derive largely from an insight among leaders and those responsible.

An example of a country that is considered to have come far in its efforts to digitally transform its services is Estonia (Heller, 2017). Apart from many services being accessible online, three technology-related aspects creating conditions for the Estonian digital welfare state are:

  • A government-issued electronic ID for all citizens
  • The “once only” principle, which means that no single piece of data should be entered or collected twice
  • The governmental data platform X-Road, which links servers and systems to each other through encrypted connections

From a privacy and integrity point of view, it can be noted that any access to an individual’s personal data by a public officer or a professional is recorded and reported.

Besides improving services and making them more efficient, there are also discussions about the possibility for the public sector to partner with alternative providers from the private sector and with civil society. An example is the Swedish Association of Local Authorities and Regions (commonly known internationally as “SALAR,” or “SKL” in Swedish).3 Klas Danerlöv, Innovation Manager at SALAR, refers to the demographic challenge, to steadily increased expectations among citizens, and to the need for working in smarter ways due to lack of resources already visible in budgets, and he discusses a change in definition of the role of the public sector, from producer to facilitator, collaborating with private service providers and with civil society in areas such as digitalized health care and emerging mobility solutions. He explains that such a change depends partly on legislation since today’s rules for procurement are rigid and prevent the public sector from cooperating with industry in a development-oriented manner. He also envisions that the fundamental responsibility of the public sector might become narrower, caring for exposed groups and for those not able to use digitalized services (K. Danerlöv, personal communication, April 16, 2018).

Regarding the digitalization of national defense, it may be noted that with the emergence of cyber-weapons and of the military cyber-warfare branch, military activity is becoming increasingly digital, since it is possible to inflict significant harm on enemy states by attacking essential infrastructure through digital networks and devices, with minimal risk of human and material losses. And since attacks can be carried out without the attacker being directly exposed, it becomes less evident who the enemy really is, making warfare less effective for supporting national identity.

2. The aspect of alternative service providers goes further due to the Internet’s global reach. Such providers can essentially be located anywhere in the world and, depending on local legislation, provide their services to citizens of various countries on a global market. The same goes for nation-states that can offer services to their own citizens worldwide, through the Internet and through collaboration agreements with local or global providers.

Interestingly, nation-states can also offer their services to citizens of other countries, which they already do. Estonia offers an e-Residency – a kind of limited citizenship in the form of a government-issued digital ID available to anyone in the world, offering the possibility to easily start and run a business in Estonia, which is part of the EU (Republic of Estonia, 2014).

Quite obviously though, the digital ID is not valid as a physical identification or a travel document, it does not serve as a residency permit, and it does not grant its holder the right of physical residency in Estonia. This aspect, regarding physical territory, people’s physical movements, and the rule of law, will be discussed further below.

The concept of e-Residency is also related to private service offerings from alternative providers and to the question of how extensive these offerings could eventually become. Suppose a digital giant like Google, Apple, Amazon, or Microsoft puts together packages of social services such as online education, digitalized health care, cybersecurity, transportation as a service, and more, and combines these packages under the umbrella of a private digital citizenship – in this way taking on the role of a private virtual state, PVS. To what extent would that be an alternative to citizenship in an existing nation-state?

Essentially, this comes down to two considerations:

  • First, the PVS offering the services will probably not have its own territory, and thus it will have its head office in some jurisdiction, having to obey the corresponding legislation. Potentially, it could acquire e-Residencies like the one offered by Estonia, for local branches in countries offering this opportunity, thus adapting to the local legislation in each country.
  • Second, users acquiring private citizenship will live in other territories. Either they will have a citizenship and a residency in the corresponding nation-state or the PVS will have to enter mutual agreements with nation-states, defining matters for users such as residency, work permits, taxation, and also the validity of some kind of passport or other travel document issued by the PVS.

Both considerations highlight an opportunity for nation-states to systematically offer extended versions of Estonia’s e-Residency, targeted toward PVSs, paving the way for what could be called the State as a Platform, SaaP.

A PVS successfully concluding such agreements, attracting a large number of users, could be described as a popular state owning no territory, fitting into an often-cited pattern in digitalization: Uber, the world’s largest taxi company, owns no vehicles. Facebook, the world’s most popular media owner, creates no content. Alibaba, the most valuable retailer, has no inventory. Moreover, Airbnb, the world’s largest accommodation provider, owns no real estate.

It also fits one of the main adaptations to digital technology mentioned above – a general shift from owning to accessing.

However, a citizenship is different from a taxi ride or a night’s accommodation. If Uber or Airbnb goes out of business, users might suffer a short inconvenience. But if this happens to a PVS, users might become stateless.

This would be acceptable only if there are a number of PVSs for users to immediately choose from, a situation that could possibly emerge after years of trial and error with pioneering PVSs offering services to early adopters who maintain their original citizenship as a back-up.

A number of PVSs for users to choose from would also, in a way, solve the issue of citizens’ lacking democratic influence on a PVS. Since a PVS is a private company, it is essentially autocratic, controlled by owners who are not replaced through elections. If other PVSs are available, users or citizens could simply “vote with their feet.” If, on the other hand, one major PVS dominated, for example, due to the network effect, this would not be the case, and the lack of a democratic process would become a serious issue.

Other aspects of the PVS scenario are economic, social, and political. Would PVSs paying for SaaP manage to provide citizenship and social services more efficiently than nation-states, at a lower cost than ordinary taxation? Or would they profile citizenship as a premium product for which people would be willing to pay more? If so, would such a selection be acceptable? Could PVSs over time become the partners that organizations such as Swedish SALAR envision, while nation-states take care of exposed groups? Or should PVSs, through international regulation, be obliged to receive all users, and if so, at what cost?

Adding to this is a final question – whether a PVS would need something corresponding to a nation, that is, a sense of unity among the users or the citizens, in order to survive. As will be discussed below, this is also a challenge for multicultural federations of nation-states, and for this reason, it’s not obvious that a PVS could ever reach the stability of a nation-state, strong enough to keep it united in situations of crisis. Rather, also with regard to the issue of lacking democratic influence, PVSs are more likely to be what they are – commercial entities offering convenient and attractive services, entities that may come and go, with a limited long-term engagement from the users.

However, the PVS model also exposes an opportunity for nation-states to adapt and remain relevant to citizens with a growing interest in a cosmopolitan set of values. Building on the kind of agreements that PVSs would have to enter with nation-states, such agreements could also be made between nation-states in a systematic way, similar to the roaming agreements between mobile telephone operators in different countries that allow their users to continue using their mobile phone while traveling abroad, paying for the usage to their home operator.

In a corresponding way, citizen-roaming agreements between nation-states would allow citizens to move freely between the participating countries, benefitting from social services and rights as local citizens, while also paying local taxes and being subject to local duties. It could be seen as an extended version of the freedoms in trade blocks such as the EU, with obvious challenges, not least regarding interstate trust, but also with increased possibilities from an administrative point of view in a digitalized world, with conceivable digital entities such as clearing houses for citizen-roaming. As opposed to a PVS, a nation-state offering citizen-roaming would be able to provide long-term stability and democratic influence, but whether one model would be better adapted than the other to future conditions in a digital world remains to be seen.

Related to this topic, leaving the physical world behind, are virtual worlds, which could be seen as a kind of private virtual reality states. Existing only in virtual reality, there’s no need for agreements with states in the physical world nor for travel documents or for physical services such as health care, transportation, and so on.

Not providing for people’s physical needs, virtual worlds can hardly be considered an alternative to nation-states. They may, however, have a certain influence on real-world economies – substantial economic transactions are already made for virtual goods in virtual worlds such as Second Life, and this may increase with improved technology for virtual reality. It is also likely that when virtual reality reaches a sufficiently high-quality experience, people will spend significant time in virtual worlds, both for work and for pleasure.

3. The third aspect mentioned above regarding the digital transformation of nation-states is the structure of the democratic process. Two issues emerge in this regard.

First, there seems to be a possibility to effectively manipulate individuals’ opinions and world views through digital means, without people knowing it or noticing it. This would put democracy, as we know it, at risk.

Second, today’s democratic systems have not yet adapted to the way the Internet and digital technologies have changed the conditions for information distribution, making geographic distances essentially irrelevant from an information perspective.

The first issue – manipulation of individuals – came to general attention in the late 2010s, partly due to events regarding the company Cambridge Analytica. The company had reportedly collected massive amounts of personal data through Facebook and built individual profiles from the data, relying on research in psychometrics. The profiles had then been used to expose individuals, at a large scale and through social networks, to personally tailored political advertisements designed to activate a certain response, with the aim to move the electorate in a desired direction (Cadwalladr and Graham-Harrison, 2018).

Although not conclusively proved to be effective, the basic idea is in line with the capabilities of digital technology today – analysis of large amounts of data, personalization based on such analysis, and individual targeting through social networks, all at a massive scale. Exploiting human nature, with its fairly predictable responses to certain stimuli, also seems to be viable, but possibly further progress has to be made – a few more cycles in the innovation loop – before the method becomes effective.

Possible scenarios could include tuning the output from AI-based personal digital assistants or finely manipulating the search results of a large search engine – a plot that has been credibly described in the novel 11 Grams of Truth (Swe: 11 gram sanning) (Akenine, 2014).

Any such scenario becoming reality would be a significant threat against the democratic process and also to any nation-state that depends on democracy for its legitimacy.

One proposed model that could be resilient to such manipulation attempts is deliberative polling, which is a version of deliberative democracy, a term coined by Joseph M. Bessette in 1980, meaning that democratic decisions must be preceded by authentic deliberation – long and careful consideration or discussion – not just voting (Bessette, 1980). The idea behind deliberative polling, which was proposed by James Fishkin (1988), is that rather than having a whole population of relatively uninformed people make decisions in a matter, it is better to have a random, representative sample of the population make an informed decision after thorough briefing and discussion. Such a deliberative process normally takes place at a physical meeting during a whole weekend, with the model having been tried out in a number of countries in the world on various occasions (Center for Deliberative Democracy, 2019).
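The core mechanism of deliberative polling – drawing a random sample that mirrors the population's composition before deliberation – can be sketched in a few lines. The function below is an illustrative simplification with invented field names (not from the text): it samples each demographic stratum in proportion to its share of the population.

```python
# Illustrative sketch of proportional (stratified) random sampling, the
# selection step behind deliberative polling. Field names and data layout
# are assumptions for the example, not from the text.
import random

def stratified_sample(population, key, size, seed=0):
    """Draw `size` people from `population` (a list of dicts), keeping
    each stratum's share of the sample equal to its share of the
    population. Rounding can make the total deviate slightly from
    `size`; a real implementation would reconcile the remainder."""
    rng = random.Random(seed)
    strata = {}
    for person in population:
        strata.setdefault(person[key], []).append(person)
    sample = []
    for group in strata.values():
        share = round(size * len(group) / len(population))
        sample.extend(rng.sample(group, min(share, len(group))))
    return sample

# A population that is 60% urban and 40% rural yields a 100-person
# sample with 60 urban and 40 rural participants.
population = [{"region": "urban"}] * 600 + [{"region": "rural"}] * 400
panel = stratified_sample(population, "region", 100)
print(len(panel))  # prints 100
```

The deliberation itself – the weekend of briefing and discussion – happens after this step; the sampling only guarantees that the deliberating group is statistically representative.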

The second issue – adaptation of the democratic process with regard to new conditions for information distribution – is connected with an often-expressed worry over a general decline in political activity and political interest in democratic states, particularly in younger generations. Although it is contested that Millennials are less politically active, it has also been found that a decline in voter turnout in established democracies during the last four decades coincides with decreasing levels of people’s personal interest in politics and a declining trust in traditional democratic institutions as vehicles for personal fulfilment and well-being (Dalton, 2016). This decline in interest and trust can be effectively explained by a shift in priorities and values of Western citizens. As people have begun adapting to a higher material living standard, they are increasingly prioritizing autonomy and individual lifestyle choices over more basic economic necessities and class divisions, while traditional forms of community life and interaction have largely eroded, giving space to individualism and social isolation (Ferrini, 2012).

More specifically, from a digitalization perspective, it has been found that improved digital infrastructure and increased digital freedom in a country have opposite effects on people’s trust in the state and their sense of unity as a nation, counterbalancing each other.

Improved digital infrastructure increases people’s trust in the state for providing effective tools and services, while it also brings about social fragmentation and cultural individualization, which weakens national identity. On the other hand, digital freedom strengthens national identity since offering everyone an opportunity to speak constitutes an essential feature of democratic fairness, and democracy is found to serve as an ideological link to unite people into a nation. But digital freedom also has a negative effect on trust in the state since no institution in an open and free society can escape criticism from some segment of society.

According to these findings, the nation-state can reach increased strength through digitalization only if there is a balanced development of digital infrastructure and digital freedom (Lu and Liu, 2018).

The same study also finds that as a country becomes more Internet-connected, people’s attachment to the nation-state tends to derive more from the universal appeal of democracy (the civic approach mentioned before) than from the particular appeal of ethnicity (the ethnic approach) (Lu and Liu, 2018).

Hence, the question we ask is how the democratic process could adapt with regard to:

  • Internet-based information distribution, which has made geographic distances essentially irrelevant
  • People tending to prioritize individual choices while yet placing great value on digital freedom and democracy as unifying aspects

It is probable that a democratic process offering citizens more direct influence on regular occasions, from wherever they happen to be located, would be a favorable adaptation to such new Internet-related conditions.

One proposed model for democratic processes, which could fit this requirement, is liquid democracy, a system where voters can either vote directly or delegate their vote to other participants. Voters may select different delegates for different issues, and they are free to withdraw their delegation at any time. People who have received the right to vote for other voters can in turn delegate these votes to other delegates.

Even though this kind of voting system has roots going back to the 19th century, digital technology can help to make implementations of liquid democracy substantially more effective, flexible, and easy to use.
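The delegation mechanism described above is essentially a directed graph that must be resolved at tally time. The sketch below is a minimal illustration under assumed data structures (not from the text): each voter either casts a direct vote or delegates, and each delegation chain is followed until it reaches a direct vote; chains that form a cycle without any direct vote remain unresolved.

```python
# Minimal sketch of vote resolution in liquid democracy (illustrative
# only; names and data layout are assumptions, not from the text).

def resolve_votes(direct_votes, delegations):
    """Return the total vote weight per option.

    direct_votes: dict voter -> option chosen directly
    delegations:  dict voter -> delegate
    """
    tally = {}
    for voter in set(direct_votes) | set(delegations):
        current, seen = voter, set()
        # Follow the delegation chain until a direct vote is found.
        while current not in direct_votes:
            if current in seen or current not in delegations:
                current = None  # unresolved: a cycle or dangling delegation
                break
            seen.add(current)
            current = delegations[current]
        if current is not None:
            option = direct_votes[current]
            tally[option] = tally.get(option, 0) + 1
    return tally

votes = {"alice": "yes", "bob": "no"}
chains = {"carol": "alice", "dave": "carol", "erin": "bob"}
# carol and dave delegate (transitively) to alice, erin to bob:
# the tally becomes 3 "yes" and 2 "no".
print(resolve_votes(votes, chains))
```

A production system would add features the sketch omits – per-issue delegations, withdrawal at any time, and auditability – but the transitive resolution shown here is the distinctive step.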

It is worth noting that digital voting through the Internet is generally met with significant skepticism, mainly for lack of transparency and for the risk of voters being exposed to coercion when voting at home or in other non-controlled environments. However, there are solutions to both these issues, which have been implemented, for example, in digital elections in Norway, but for such solutions to gain trust among voters, more time will probably be needed with successful experiments (Lewan, 2013).

Finally, regarding the aspect of the structure of the democratic process in a digital perspective, it can be noted that both deliberative polling and liquid democracy are in line with an increased reliance on horizontal structures and an increased focus on purpose, mentioned above as adaptations to new conditions brought by digitalization.

These three aspects of the digital transformation of the nation-state – efficiency of services, alternative providers of those services, and the structure of the democratic process – can be considered fields where the nation-state has to adapt in order to remain relevant to citizens or, in other words, successful.

A fourth aspect as a consequence of digitalization can be added – the issue of future taxation and revenue sources. Since one main part of today’s tax revenues comes from income taxes, the main question here regards the future of work. Recent progress in AI and machine learning, specifically in the field of deep learning, has shown that increasingly advanced tasks that only humans used to be able to perform can be automated, not only physical work but also mental work. In many cases machines even outperform humans, and there’s a general concern that machines eliminating human work will lead to high levels of unemployment. However, efforts at predicting whether AI and automation will eliminate more jobs than they will create, or the contrary, provide results pointing in all directions and differing by tens of millions of jobs only in the US, indicating that it is difficult to know which effect will prevail (Winick, 2018). It is also true that rather than eliminating jobs, AI and automation are expected to eliminate tasks within jobs, while jobs will be transformed to focus on tasks that humans still do better than machines, such as creativity, convincing and motivating other people, empathy, and fine dexterity.

Thus, whether tax revenues from income tax will decrease drastically due to unemployment is unclear, but it is certain that the future of work will differ from work today. Should unemployment rise significantly, solutions such as a negative income tax or an unconditional basic income have been proposed for distributing resources to citizens. As for an alternative tax revenue base, different forms of increased sales taxes are often suggested, with generous tax deductions for consumption that can be connected with present or future professional activity, ranging from equipment to investments in education, as an adaptation to the trend of increased self-employment in the gig economy.


3.4      Threats from supra-states, localism, and cosmopolitanism

Apart from changed conditions due to digitalization that force nation-states to adapt, the most commonly discussed threat to the nation-state is perhaps the combination of supra-state and intergovernmental institutions such as the EU on one hand, and localism with claims of independence by populations such as the Kurds and the Catalans and by regions such as Scotland and Quebec on the other.

As for the EU, which could be seen as the most advanced attempt at a modern federation of nation-states, there are different ways to explain the main reasons for building the union, ranging from national interests of gaining influence by being members of a larger player in world politics and international economy, to an effort for peace and collaboration, or a step toward a European state. The main question, however, is whether it would be possible to build a sense of national unity among Europeans – either through the civic or the ethnic approach – strong enough to mobilize the population to collective action in a crisis. This is effectively contested by Hutchinson (2003), who notes that the EU does not even possess a common language, let alone a bank of myths, memories, and symbols to convey a sense of belonging in a community of sentiment.

Referring to the United States, Australia, or Canada as successful models for building multicultural federations is of limited relevance for Europe since these are countries founded by immigrants who have found a common identity in the willingness to build a new culture in a new territory.

From a larger perspective, looking beyond “immigrant nations,” it is also worth noting that what characterizes successful constructions of multicultural federations, as opposed to federations that keep suffering from fragmentation due to cultural divides, can be explained by three elements – first, building political alliances on already existing cross-cultural voluntary organizations, such as reading circles, trade unions, political clubs and so on; second, providing public goods in exchange for taxes across all regions of a country; and third, having a shared language with which individuals can communicate and converse (Wimmer, 2018). None of these could be considered viable for the EU, except maybe the second under the condition that a federal tax is introduced, which is a remote option as of today.

Rather than becoming a European state, the future role of the EU and other large federations is more likely to be in line with what Thompson and Hirst (1995) argue – trade blocks that, together with international organizations and bodies, can exercise some governance over international economic activities, labor market policies, social and environmental protection, and so on, based on legitimacy derived from nation-states.

The threat posed to nation-states by localism is somewhat different. Although it might lead to a few more new small states, trying to build strong societies on a local homogeneous unity as an answer to what might be perceived as an increasingly hostile globalized world is essentially a dead end.

From the perspective of the innovation loop, it is undeniable that the world is becoming increasingly interconnected, multicultural, and diverse through steadily more efficient and pervasive communications technologies. Any attempt at modelling a society on cultural homogeneity and exclusiveness will therefore necessarily lead to a society that is less adapted to the changed conditions in the world, and thus less competitive. Furthermore, as already noted, the more a society becomes Internet-connected, the less people will feel attached to the nation-state based on ethnicity, in favor of the universal appeal of democracy (Lu and Liu, 2018). In other words, the wave of ethnic and nationalistic ideologies sweeping over the world during the mid-2010s – promoting harder delineations of national borders, tighter immigration controls, stronger trade barriers, stronger celebration of national symbols and of authoritarian regimes, more idealized depictions of nationalistic history, and so on, entailing increased racism and nationalist xenophobia – could be seen simply as a temporary swing of the pendulum of history.

If the innovation loop is a valid model for describing the origin of change, the pendulum could be expected to swing back and forth around a center that moves along the trajectory of continuous innovation and of adaptation to changed conditions. In that case, it is just a matter of time before ethno-nationalist and isolationist ideologies will turn out to be unsuccessful as a way to improve the workings of the nation-state, which in turn will make the pendulum swing back toward more open, interconnected, and multicultural societies. However, the attempt at turning back the clock with nationalist and separatist values is understandable, not only because of increased migration, which is often the main reason for making nation-states more closed. Some view the world as increasingly divided into two categories of people – one cosmopolitan with mostly urban people, sharing an international set of values, interests, and culture, communicating globally and, to some extent, also travelling globally, and another category of more locally rooted people, typically rural to a larger degree, and less able to harvest the opportunities offered by an increasingly interconnected world.

Since the latter category could be considered less adapted to new conditions emerging in a digitalized world, wanting to go back in time to a situation with more closed nation-states – the billiard balls mentioned before – is a natural reaction. Another important reason many people wish for time to stop is arguably a general fear of the future, which, for natural reasons, remains vague to many. While the pace of change is increasing, most political leaders themselves have only vague ideas about the future, and credible, long-term, positive visions for a future world are clearly lacking.

This is not to say that the opinions expressed by the second category mentioned above are not important. On the contrary, what is important, and in everyone’s interest, is keeping society united, learning from diversity, and helping everyone to take part in the ongoing digitalization process since high tensions in society will be unfavorable for everyone.

On the other hand, one may ask whether the cosmopolitan category holds values and motivations such that people in this group would rather abandon the nation-state in favor of some global alternative state, such as a PVS, uniting, for example, people living in large urban environments across the world.

Anthropologist Ulf Hannerz (2004), investigating the nature of cosmopolitanism, sketches a wide range of different flavors and origins. The range of these flavors and origins stretches from the international elite working in global organizations, moving across the world, to a Pakistani villager, not particularly well-educated, but a member of a Sufi cult and formerly a migrant laborer in the highly mobile, heterogeneous society of the Gulf, there picking up a series of foreign languages. Hannerz further notes that cosmopolitanism has two faces; one side aesthetic and intellectual with a happy face, and one political side with a worried face.

He raises the question of whether there could be a “thick” cosmopolitanism, corresponding to the unity required for building a national identity, and without coming to an answer, he observes that while cosmopolitans are often seen as root-less, cosmopolitanism can also be seen as the “privilege of those who can take a secure nation-state for granted” (Hannerz, 2004, p. 78).

It is thus not obvious whether a growing cosmopolitan tendency should be seen as a threat to the nation-state, or even as a support. It is also clear that the number of far-reaching ideological cosmopolitans in the world is dwarfed by the sheer number of involuntary cosmopolitans – international migrants – who numbered 258 million worldwide in 2017, up from 220 million in 2010 (United Nations, 2017).

Increasing international migration thus appears to be a stronger threat to nation-states, and not only because it is a main driver of the backward-looking forces striving for closed and isolated countries, poorly adapted to the world's changing conditions. Migration is also likely to continue growing, since the interconnectedness of the world lets information flow more easily, making the differences and injustices in the world more apparent to everyone. This is further reinforced since cultural homogeneity will be increasingly difficult for advanced states to use as an argument for exclusion, and closed borders will thus appear as what they are: a mere refusal of entry based on the lottery of birth.

Such a world order will become untenable, and it will remain a major threat to advanced nation-states unless the differences decrease. Just like within Western nation-states, where improved communications technologies over time made a levelling of resources and opportunities between classes inevitable in order to avoid unsustainable tensions in society, it is unlikely that the world’s poor will passively accept their poverty, and a levelling of resources in the world will become necessary.

Apart from limiting the migration pressure on advanced nation-states, such a levelling would also strengthen the opportunities for the concept of citizen-roaming mentioned before, through improved conditions for interstate trust, and thus support nation-states in more than one way.


3.5     The rule of law

The final consideration to be made on the nation-state is the rule of law. A declining autonomy for nation-states to exclusively decide over internal politics does not diminish the importance of the state’s monopoly as a lawmaker in its territory. On the contrary, in a world that is more complex and interconnected on the one hand, and more individual and diversified on the other, the rule of law as a guarantee of stability, limiting the harm that individuals, companies, and the government itself can do, is of increased importance. The rule of law is also necessary for a structure of global governance to be effective, as a way for transforming international agreements into national laws and imposing such laws on the citizens. And although cities are often considered to be the main future actors of global collaboration since a majority of all people in the world now live in cities, nation-states are important for making cities and regional governments accountable according to the rule of law.

Apart from this, however, the rule of law is also a tool of power in the hands of the nation-state, which eventually comes down to the monopoly of coercive power and to the fact that, even in a digitalized world, all individuals have a physical body and all cables and servers that make up the Internet have a physical location.

The combination of the monopoly of coercive power and the control of a physical territory gives the nation-state significant power over individuals and their movements, and over the organizations they make up – a power that digital tools such as surveillance cameras, facial recognition, data analysis, and so on strengthen rather than weaken. This power can be used by authoritarian states for control and oppression. However, also in democracies, even where the level of corruption is low, it remains a strong foundation on which the nation-state can build its survival, even when challenged.

We can look at the cryptocurrency Bitcoin as an example of such a challenge. Although the future of Bitcoin is unclear today, not least because of the unsustainable level of energy consumed by the Bitcoin system and its low transaction speed, future versions might offer interesting opportunities for international peer-to-peer transactions without the need for controlling parties such as banks or financial institutions. Crypto-libertarians and/or crypto-anarchists,4 however, often consider Bitcoin one of the tools that could make the nation-state irrelevant, due to its distributed, noncentralized nature.

Also, the underlying database technology, the blockchain, may offer attractive distributed applications in a wide range of contexts, eliminating the influence and bias of controlling parties, but again, crypto-libertarians and/or crypto-anarchists see many of these opportunities as ways to subvert the state and its institutions.

Confronted with such a challenge, before it reaches critical levels, the nation-state could use the rule of law to defend itself effectively by drastically limiting the use of such applications. It is true that, through virtualization, software applications can be made to jump seamlessly in real time between physical servers across the world, but ultimately, the state’s ability to restrict their usage is significant.

In the context of rule of law and digital technologies, a special note may be made on what could become a global competition between liberal democracy and digital authoritarianism (Wright, 2018). Digital technologies, in particular the latest evolution of artificial intelligence, have turned out to offer authoritarian regimes – for the first time in decades – a possible way to sustain long-term economic growth while controlling their citizens. China is by many considered to lead this development with large-scale Internet censoring and widespread surveillance through technologies such as face recognition, and with machine learning tools in combination with a “social credit system,” which according to the Chinese government will be rolled out nationwide for every citizen by 2020 (Munro, 2018).

Many AI-based machine-learning technologies depend on access to large amounts of data for training. Authoritarian regimes like China therefore have a significant advantage over liberal democracies in using such technology for analyzing and controlling citizens’ behavior, being able to access and combine people’s personal data with few restrictions and privacy concerns.

The official scope of the “social credit system” is to increase trust and to decrease the level of corruption and fraud in Chinese society. This is admittedly needed and some citizens also welcome it, but it is also clear that citizens will have limited opportunities, if any, to challenge the system.

It is an open question whether citizen control realized through digital authoritarianism will make countries like China more competitive and better adapted than liberal democracies to a digitalized world, but it is undeniable that China will give it a try. According to a recent report by the US government-financed democracy watchdog Freedom House, China is also actively spreading its methods and technology in this area to tens of other countries (Shahbaz, 2018).

Speaking against the success of digital authoritarianism is the aforementioned finding, that is, that the nation-state can reach increased strength through digitalization only if there is a balanced development of digital infrastructure and digital freedom (Lu and Liu, 2018).

However, as described by the innovation loop, neither democracies nor authoritarian states, will, in the long run, be able to resist necessary adaptations to new technologies changing the conditions for their existence; otherwise, they will collapse and be disrupted. Yet, the rule of law will give nation-states substantial capabilities to delay such a development, providing extended time for implementing change.

On the other hand, there is already another strong technology trend, apart from Bitcoin and the blockchain, hinting at a more decentralized model of society: local energy production with renewables such as solar and wind, but also with yet unexplored small-scale energy sources such as low-energy nuclear reactions (LENRs). If such energy sources over time can make wide-area power grids unnecessary, they will also bring the advantage of making societies less vulnerable to cyberattacks and cyber warfare, which typically target crucial infrastructure systems.

In a distant future, distributed technologies such as blockchain and hyperlocal energy production may one day provide the basis for a completely decentralized world order, akin to nature itself, where all individuals and entities are independent of any state structure and free to interact globally, but where – unlike in nature – an AI-based rule of law is built into the decentralized system, limiting the harm individuals and organizations can do and thus avoiding the cruel aspects of the law of the jungle. However, it is only when humans may merge with machines (making it possible to transcend the physical body) that power over physical territories will eventually lose its importance.


4 Conclusion

In an increasingly globalized, digitalized, and interconnected world, with mobility for capital and technology and for digital services and products, the nation-state is challenged but may remain a fundamental building block for governance and international collaboration. Nation-states are losing some of their sovereignty regarding exclusive control over economic and social processes within their territories, and they are less able to maintain national distinctiveness and cultural homogeneity. On the other hand, their role as a source of legitimacy for agreements made in international institutions, organizations, and bodies is becoming more important, as well as for ensuring accountability of cities and regional governments. From an innovation perspective, losing independence and gaining another role in an increasingly interconnected world can be seen as a natural evolution, being a consequence of steadily improved communications technologies through human history.

In order to remain relevant to citizens and to the world, however, nation-states need to adapt to new conditions posed by digitalization. These adaptations can be found in three fields – efficiency of services offered by the state to citizens, alternative providers of those services, and the structure of the democratic process.

Efficiency of services needs to increase through digital transformation, in order to address a combination of increased expectations and limited resources. The increased expectations derive partly from people’s experience of the large range of various services offered online, with a significant increase in convenience and ease of use compared to only a decade ago.

Alternative providers of social services may partner with public agencies that in turn may transform their role from producer to facilitator of services. Such service providers could also extend their offerings into a private virtual state eventually providing an alternative to traditional citizenship. However, since a PVS will depend on legislation in the nation-state where it is registered as a company, it will need to enter agreements with nation-states where its users live and work, and it will need to solve the issue of lacking democratic influence on the owners. Meanwhile, nation-states could offer services to PVSs through concepts such as State as a Platform, and nation-states could also make agreements with other nation-states for citizen-roaming, allowing their citizens to live and work in other countries.

The structure of the democratic process needs to adapt to decreased political involvement and to the risk of advanced manipulation of people’s opinions on one hand, and to the fact that geographic distances have become irrelevant on the other. Two possible models addressing those issues are deliberative polling and liquid democracy.

It is also found that the nation-state can reach increased strength through digitalization only if there is a balanced development of digital infrastructure and digital freedom.

Another question related to digitalization regards a future base of tax revenue in the case that AI and automation would lead to massive unemployment and to reduced tax revenues from income taxes. Although it is unclear whether automation will eliminate more jobs than it creates, an often-suggested alternative tax base is increased sales taxes, while alternative ways of distributing resources to citizens are negative income tax or unconditional basic income.

Nation-states are also considered to be threatened by a combination of supra-states, localism, and cosmopolitanism. However, since it will be hard for a supra-state to build a national identity strong enough to keep it united in crisis, the nation-state will likely have a role as a source of legitimacy for larger international structures, as mentioned before. Localism, on the other hand, will, in the long run, have limited opportunities to be successful due to the increasingly interconnected nature of the world. Specifically, with regard to ethnic homogeneity, it has been found that as a country becomes more Internet-connected, people’s attachment to the nation-state tends to derive more from the universal appeal of democracy than from the particular appeal of ethnicity. Regarding cosmopolitanism, it is not clear whether a growing cosmopolitan tendency could be seen as a threat to the nation-state, or even as a support. However, the increased number of involuntary cosmopolitans – international migrants – constitutes an increasing threat to nation-states, reinforced by improved communications technologies highlighting the injustice of the lottery of birth, and, eventually, the only way to remediate this threat is through a levelling of resources in the world.

Even though it is difficult to define a valid alternative that would threaten or disrupt the nation-state, the need for the nation-state to adapt as described above in order to remain relevant is urgent since digitalization is a process that arguably has just started and since the pace of change is accelerating. Not adapting is not an alternative since no entity can avoid adaptation to new conditions brought about by inventions without collapsing or being disrupted. However, the combination of the rule of law, the monopoly of coercive power, and the control of a territory gives significant power to the nation-state even in a digitalized world, which eventually comes down to the fact that all humans have a physical body and that the Internet is built on servers and cables that all have a physical location. This power gives nation-states substantial capabilities to delay any challenging development, providing extended time for implementing change.


Ethical considerations

The persons interviewed and named throughout this chapter (wherever applicable) have all provided their informed consent to appear in-text.



Acknowledgments

The author would like to thank the following persons for providing their views on this topic in personal interviews through April and May 2018: Anders Sandberg, Carl Heath, Darja Isaksson, Jan Nolin, Jan Söderqvist, Klas Danerlöv, Leif Edvinsson, Matthew Zook, and Stefan Fölster.



Notes

  1. The term “best adapted” is a more accurate, and less problematic, representation of the source material than the popularly used term “fittest.”
  2. Network effect: a phenomenon whereby a product or service gains additional value as more people use it.
  3. Swe: Sveriges kommuner och Landsting.
  4. Crypto-anarchists or crypto-libertarians refer to people who use cryptographic software striving for total or a high degree of anonymity, freedom of speech, and freedom to trade.



References

Akenine, D., 2014. 11 gram sanning [11 grams of truth]. Stockholm, Sweden: Hoi Förlag AB.

Anderson, B.R.O., 1991. Imagined communities: reflections on the origin and spread of nationalism. 2nd ed. New York, NY: Verso.

Bessette, J.M., 1980. Deliberative democracy: the majority principle in republican government. In: R.A. Goldwin and W.A. Schambra, eds. How democratic is the constitution? Washington, DC: American Enterprise Institute for Public Policy Research, pp. 102–16.

Boffey, D., 2017. EU blacklist names 17 tax havens and puts Caymans and Jersey on notice. The Guardian. [online] Available at: < dec/05/eu-blacklist-names-17-tax-havens-and-puts-caymans-and-jersey-on-notice> [Accessed 5 Sep. 2019].

Cadwalladr, C. and Graham-Harrison, E., 2018. Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. The Guardian. [online] Available at: < ence-us-election> [Accessed 5 Sep. 2019].

Center for Deliberative Democracy, 2019. What is deliberative polling? [online] Center for Deliberative Democracy, Stanford University. Available at: < what-is-deliberative-polling> [Accessed 5 Sep. 2019].

Dalton, R., 2016. Why don’t Millennials vote? The Washington Post. [online] Available at: < vote> [Accessed 5 Sep. 2019].

Ferrini, L., 2012. Why is turnout at elections declining across the democratic world? [online] E-International Relations. Available at: < turnout-at-elections-declining-across-the-democratic-world> [Accessed 5 Sep. 2019].

Fishkin, J.S., 1988. The case for a national caucus: taking democracy seriously. Atlantic Monthly, Aug., pp. 16–18.

Hannerz, U., 2004. Cosmopolitanism. In: D. Nugent and J. Vincent, eds. A companion to the anthropology of politics. Oxford, UK: Blackwell Publishing Ltd, pp. 69–85.

Hechter, M., 2001. Containing nationalism. New York, NY: Oxford University Press.

Heller, N., 2017. Estonia, the digital republic. The New Yorker. [online] Available at: <> [Accessed 5 Sep. 2019].

Hutchinson, J., 2003. The past, present, and the future of the nation-state. Georgetown Journal of International Affairs, 4(1), pp. 5–12.

Kenney, M., Rouvinen, P. and Zysman, J., 2015. The digital disruption and its societal impacts. Journal of Industry, Competition and Trade, 15(1), pp. 1–4.

Kurzweil, R., 2001. The law of accelerating returns. [online] Kurzweil Accelerating Intelligence. Available at: <> [Accessed 5 Sep. 2019].

Lewan, M., 2013. E-röstning kan testas om 5 år [E-voting can be tested in 5 years]. [online] Ny Teknik. Available at: < tas-om-5-ar-6403323> [Accessed 5 Sep. 2019].

Lu, J. and Liu, X., 2018. The nation-state in the digital age: a contextual analysis in 33 countries. International Journal of Communication, 12, pp. 110–30.

Munro, K., 2018. China’s social credit system “could interfere in other nations’ sovereignty.” The Guardian. [online] Available at: < jun/28/chinas-social-credit-system-could-interfere-in-other-nations-sovereignty> [Accessed 5 Sep. 2019].

Republic of Estonia, 2014. E-residency. [online] Available at: <> [Accessed 5 Sep. 2019].

Rogers, E.M., 1962. Diffusion of innovations. 1st ed. New York, NY: Free Press of Glencoe.

Schwartzwald, J.L., 2017. The rise of the nation-state in Europe. Jefferson, NC: McFarland.

Shahbaz, A., 2018. Freedom on the net 2018: the rise of digital authoritarianism. [online] Available at: < dom-net-2018/rise-digital-authoritarianism> [Accessed 5 Sep. 2019].

Thompson, G. and Hirst, P., 1995. Globalization and the future of the nation state. Economy and Society, 24(3), pp. 408–42.

United Nations, 2017. International migration report 2017: highlights (ST/ESA/SER.A/404). New York, NY.

Wimmer, A., 2018. How nations come together. [online] Available at: <https://> [Accessed 5 Sep. 2019].

Winick, E., 2018. Every study we could find on what automation will do to jobs, in one chart. [online] MIT Technology Review. Available at: <www.technologyreview.com/s/610005/every-study-we-could-find-on-what-automation-will-do-to-jobs-in-one-chart> [Accessed 5 Sep. 2019].

Wright, N., 2018. How artificial intelligence will reshape the global order. [online] Foreign Affairs. Available at: < artificial-intelligence-will-reshape-global-order> [Accessed 5 Sep. 2019].

Eight + one megatrends you need to check out in 2018

A new year has just begun, with phenomena and trends that were mere ideas just a few years ago. Change will never be as slow as today, and therefore it is important to look up to see the big picture. Here are eight megatrends, plus one upcoming hot topic, you need to keep an eye on.

1. AI

If digitalization is what many have been wrestling with in recent years, artificial intelligence could be called digitalization 2.0. AI systems are not yet as smart as humans, but once information flows and processes have become digital, AI can help with three things: optimizing and streamlining entire businesses, predicting everything from market reactions to legal decisions, and personalizing services and products for individuals. The challenges are many, both ethical and practical, but AI will change all industries, from manufacturing, finance, and law to healthcare and retail. Start with a small project with direct business benefits.

2. Automation

Robots have long been used for automation in manufacturing, but with AI, automation now enters offices and takes over routine tasks in all possible areas—from sales and customer support to administration and financial advice. Everything that can be automated will be automated, and this also allows people to work with more interesting and stimulating tasks.

3. Internet of Things

More and more products will be connected. The networks of connected things are for AI what the human senses are for the brain—they collect information that enables new, smart, customized services to be delivered. The devices can also talk to each other, on their own, and, for example, order service or spare parts, or warn that they are about to break. Through this increased efficiency, ownership is shifting toward access, just like in the music industry, where we now pay for access to music instead of owning discs and music files.

4. Devices that talk to you

Computers are now learning one of the most human things of all—our spoken language. Soon, we will get used to talking to our devices, from smart speakers like Amazon’s Echo or Google’s Home to digital customer support assistants who even understand if we’re angry, annoyed or happy. For companies in all industries, it will be crucial to follow the development of automated customer support based on natural language processing. A significant advantage is that it will become easier for everyone—elderly people, not the least—to use digital services.

5. Blockchain

Blockchain is what everyone talks about but few understand. Don’t worry, it’s not rocket science: a blockchain is a kind of database, with multiple identical copies, that everyone or many people manage together. This entails two things:

1. A blockchain has superior security since it cannot be manipulated on one single computer or server.

2. It replaces the need for an independent third party, such as a bank, a lawyer or a supervisory body.

There are two other important things that distinguish the blockchain: it can contain smart contracts, a kind of application that automatically performs agreed actions, and it can manage both money and other assets in the same system, so that transactions of assets and money are carried out in one single process. Altogether, this opens opportunities for the blockchain to dramatically change everything from financial systems and the real estate business to consumer services. Many experiments are now being set up, and it is a good idea to start testing blockchain-based applications on a small scale.
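The tamper-resistance described above comes from hash-linking: each block stores a cryptographic fingerprint of the block before it, so changing any historical record breaks every later link. A minimal sketch in plain Python (an illustration only, not any real blockchain implementation; smart contracts, consensus, and the distributed copies are omitted):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """SHA-256 over a canonical JSON encoding of the block."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, data) -> None:
    """Each new block stores the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev})

def is_valid(chain: list) -> bool:
    """Recompute each link; one altered block invalidates the chain."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
for tx in ["Alice pays Bob 5", "Bob pays Carol 2"]:
    append_block(chain, tx)

assert is_valid(chain)
chain[0]["data"] = "Alice pays Bob 500"   # tamper with history
assert not is_valid(chain)                # detected: the link no longer matches
```

In a real system, many independently held copies of the chain make it even harder to rewrite history, since a manipulated copy would disagree with all the others.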

6. Self-driving cars

In 2018, the first cars that let you take your hands off the steering wheel for extended periods will go on sale, and in a few years, cars will be fully autonomous. The upcoming change will be enormous, and not only for transportation. One potential outcome is that the second-hand value of your car may fall to zero when cheap transportation services with electric self-driving cars become widespread. Another is that the price of downtown hotel rooms and short-term offices may fall when self-driving hotel rooms and office spaces can be positioned where you want, and get you where you want, whenever you want. The combination of electric power, connected cars, and self-driving cars will lead to a perfect storm and a brand new market: the passenger economy. And the car industry will be tremendously challenged when demand for new cars starts to decrease heavily.

7. Virtual reality

Virtual reality, and augmented or mixed reality, where computer images are “stuck” to physical objects in the real world, are still in their infancy. But the technology is developing rapidly, from Facebook’s VR headset Oculus Rift to Microsoft’s AR headset HoloLens and, not least, the intriguing company Magic Leap with its AR system Magic Leap One. Applications range from virtual meetings and tourism to visualization of building projects and tools for service technicians. Definitely worth keeping an eye on.

8. IT security

IT security has always been a race against increasingly smart attacks. Soon, or perhaps already, there will be AI-based systems that look for new weaknesses on their own and perform attacks without human intervention. Today, the average time to detect a data breach is over six months, and IT security is therefore as much about finding hidden intruders as about preventing attacks. The threat is directed at companies and organizations as well as society’s infrastructure. There is no simple solution; the takeaway is that IT security will continue to grow in importance.

+ One upcoming hot topic—Disruptive Energy

Although still little talked about, two disruptive and abundant carbon-free energy sources, potentially helping to solve the climate crisis and much more, are approaching commercialization. One is called LENR and could be described as radiation-free nuclear power, suitable for everything from households and vehicles to industry. The other one is hydrogen, which produces only pure water when burned. Hydrogen now faces new possibilities through a new carbon-free process for producing abundant amounts of the gas. This will be discussed at the New Energy World Symposium in Stockholm, Sweden, June 18-19, 2018. Don’t miss it!

So what should I bet on?

As everything gets increasingly digitalized, automated and streamlined, four human qualities will increase in value: creativity, the ability to convince and motivate other people, empathy, and fine dexterity. Unique human experiences based on these four abilities—whether it’s a different shopping experience, a travel experience, a visit to a restaurant, or a business encounter—will always be in demand. These four areas are where you can look for new opportunities in 2018. Equally valuable is our human ability to see the larger perspective, so as the pace of change increases, never forget to look up, and around!

PS. Want to learn more about one or more of these megatrends? Don’t hesitate to contact me!

Let’s talk about TRUST—in finance and in business

[Presentation at a launch event of a new book on FinTech in Sweden, see below].

Trust is a funny thing. It’s a very human condition, which most people would consider absolutely essential in financial services and in business. Yet, we don’t reflect much on what trust really is. Most often it’s a gut feeling, not very well defined.

If you look it up in dictionaries you will find definitions such as ‘firm belief in the reliability, truth, or ability of someone or something.’

Note the word belief.

And here’s the first take-away—things that we use regularly without knowing well what they are and how they work are likely to be unexpectedly changed or disrupted by digitalisation, since we don’t analyze the mechanisms in play.

I started to look at the concept of trust for a contribution to a research project at the Stockholm School of Economics which has resulted in the upcoming 23-author book ‘FinTech: Accounts of Disruption from Sweden and Beyond’, and this is a short summary of the presentation I recently gave at a launch event in Stockholm.

First—what is trust?

Then, in interviews with ten individuals (see list at the bottom), I investigated how FinTech companies and people in the financial industry build trust and relate to trust.

And here are some main findings:

  1. Markers. Fintech startups build trust by using markers such as well-known people on the board or among the investors, tests by external experts, media reports, using established tools such as Bank-ID, getting an ISO certification, a banking license (which makes you less global!) and through compliance. Other markers that build trust over time are reliability and good functionality. Note that almost none of this relates to whether the service is objectively trustworthy or not. But the markers give users a sense of predictability.
  2. Social. More than markers, the interviewees considered the users’ friends and social networks to be important today. People trust their friends, or essentially they know that if anyone among their friends’ friends’ friends noticed something unreliable, this information would spread very quickly. If they haven’t heard anything negative, the service is probably trustworthy. You could also argue that this applies more to millennials, who are more used to interacting and giving away data on the Internet. Or as one of the interviewees put it: They are ‘trusting online, or mistrusting, as the case may be, or being wary but in a different way than people who are pre-internet are wary.’ This is in line with digitalisation: the Internet enables peer-to-peer communication across the world, as opposed to the hierarchies we have built for thousands of years.
  3. Two kinds of trust. You can have trust regarding security, and regarding whether a financial institution will give you good advice and manage your funds in a good way. Some interviewees thought that traditional banks are trusted for security but that users’ trust in banks with regard to managing funds is decreasing. Some also thought that millennials have more trust in Internet giants such as Google, Facebook, Apple, and Spotify, than they have in banks, and that these will be more successful launching new financial services, such as direct payments through messaging services, launched by Facebook among others, and recently announced by Apple. This was considered to be an opportunity for FinTech companies.
  4. Trust is two-way. For FinTech companies, it’s essential not only to be trusted but also to have trust in their customers, partly driven by regulation, e.g. KYC (Know Your Customer) rules. The Swedish bank-owned digital identification system, Bank-ID, has been a door-opener for digital financial services, making it possible to identify people remotely. However, Bank-ID has also been criticized, partly for being privately owned by major banks with a potential self-interest in not being challenged, partly for lacking security (more about this in our report).
  5. Digital trust—Blockchain. As mentioned above, digital technology is efficient for prediction. One example is the startup Hiveonline, which is building a system for automatic credit rating, as opposed to human judgment. The system collects data on interactions between parties, and through an algorithm it calculates a credit score that is not influenced by judgment and supposedly holds a higher quality with regard to predicting how individuals will behave. In other words, trust. (The collected data is stored in a blockchain—the distributed ledger system which is the basis of Bitcoin and which could be considered to provide distributed trust, since it eliminates the need for a trusted party that controls and guarantees transactions. Note that blockchain technology is not yet considered mature, and some believe that it cannot be used effectively for applications other than cryptocurrencies such as Bitcoin.)
  6. Possible effects of automated credit ratings: In developed economies, a rebalancing between large businesses and micro-businesses. In developing economies, two billion unbanked people, of whom 1.5 billion lack even an ID, birth certificate, or other documents, could get a credible credit score and go to the bank to get a loan.
  7. Culture. Discussing global systems for digital trust, it’s important to remember cultural differences in people’s perception of trust. One such main difference is the scale between cognitive and affective trust, where cognitive trust is based on the confidence you feel in someone’s accomplishments, skills, and reliability, whereas affective trust arises from feelings of emotional closeness, empathy, or friendship. It turns out that in business contexts, cognitive trust is predominant in the US, in Australia, and in northern Europe, whereas affective trust is more important in Asia, in Africa, in the Middle East, in Mediterranean countries, and in South America. Possibly, affective trust is more important in countries where the legal system is less effective. US businesses, on the other hand, rely heavily on written agreements and consider mixing cognitive and affective trust unprofessional. Yet, proposing a written agreement in a country where affective trust is predominant could even be seen as offensive—you don’t trust me? All these cultural differences are important to recognize and relate to in a potential global system for digital trust, as in all other contexts of digitalization, where human aspects are half of the equation.

    Erin Meyer—The Culture Map

  8. The Trustnet—a possible evolution of the Internet. This would be a part of the Internet which you can only access if you identify yourself with a global, non-governmentally controlled ID system, e.g. based on blockchain technology. In this way, everyone on the Trustnet would see who everyone else is—even governments or companies wanting to do surveillance would need to identify themselves. In this way, surveillance becomes symmetric, as in a small village where everyone knows everything about everyone, as opposed to asymmetric as of today, when we don’t know who is watching us. You would then see if people would ‘vote with their feet,’ and for different activities move between the Trustnet (where everyone is identified and visible), the Internet (where you can be anonymous and visible), and the ‘Darknet‘ (where you can be anonymous and invisible).
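The automated credit-rating and distributed-ledger ideas in points 5 and 8 above can be illustrated with a minimal sketch. This is not Hiveonline’s actual system; the scoring rule, field names, and data are invented for illustration, and the “ledger” here is just a hash-chained Python list standing in for a real blockchain:

```python
import hashlib
import json

def credit_score(interactions):
    """Toy scoring rule: the share of interactions fulfilled on time,
    scaled to 0-100. A real system would weigh many more signals."""
    if not interactions:
        return 0
    on_time = sum(1 for i in interactions if i["fulfilled_on_time"])
    return round(100 * on_time / len(interactions))

def append_block(chain, record):
    """Append a record to a hash-chained ledger: each block stores the
    hash of the previous block, so past records cannot be altered
    without breaking the chain."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    block_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"record": record, "prev_hash": prev_hash, "hash": block_hash})
    return chain

# Hypothetical interaction history for one small business
history = [
    {"party": "supplier-a", "fulfilled_on_time": True},
    {"party": "customer-b", "fulfilled_on_time": True},
    {"party": "customer-c", "fulfilled_on_time": False},
]

ledger = []
for event in history:
    append_block(ledger, event)

print(credit_score(history))  # 67
# Every block's prev_hash matches the hash of the block before it
assert all(ledger[i]["hash"] == ledger[i + 1]["prev_hash"] for i in range(len(ledger) - 1))
```

The point of the chained hashes is that the score is computed from a history no single party can quietly rewrite, which is what lets an algorithm replace judgment as the source of trust.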

The book ‘FinTech: Accounts of Disruption from Sweden and Beyond’, expected to be released later this year, is part of the three-year research project The Internet and Its Direct and Indirect Effects on Innovation and the Swedish Economy—led by Professor Robin Teigland at the Stockholm School of Economics, with funding from IIS, The Internet Foundation In Sweden.


List of persons interviewed:

Cecilia Skingsley, Deputy Governor of Sweden’s central bank, the Riksbank; Henrik Rosvall, CEO of the savings app Dreams; Johan Lundberg, Co-founder at the FinTech-focused VC firm NFT Ventures; Daniel Kjellén, CEO at the integrated bank information app Tink; Ulf Ahrner, CEO at the digital investment advisory company Primepilot; Danny Aerts, CEO at IIS (Internetstiftelsen); Lan-Ling Fredell, Head of Operations at Stockholm FinTech Hub; Sofie Blakstad, CEO and Founder at the financial trust platform Hiveonline; Frank Schuil, CEO and Co-founder at the Bitcoin-focused startup Safello; and Jonathan Jogenfors, researcher at Linköping University.

Note: I also do seminars and workshops on digital transformation, and if you want a deeper look at digitalization, please don’t hesitate to contact me.

Seven Things You Should Know to Be a Digital Winner

It doesn’t matter whether you’re a doctor, a bus driver, a lawyer, an economist, an industry worker, a salesman, a trader, a teacher, an engineer or a customer service agent—your job won’t exist in a decade or two. Not in the form it exists today.

The reason is that machines are getting better and better at doing what humans do, in almost all fields. And they’re making progress quickly—faster than many would have believed. Google’s AI (artificial intelligence) based system AlphaGo recently beat the world’s best player in the world’s most complex board game—Go—ten years earlier than expected.

AlphaGo has now stopped playing games and will instead focus on solving world-class problems—finding new treatments for diseases, dramatically decreasing energy consumption, and inventing revolutionary new materials. In other words, it’s a fairly smart machine, and it’s self-learning too!

Within all the professions I mentioned initially, and in others too, advanced AI-based systems are already taking over more and more of people’s work tasks.

So now you’re asking, how will all this affect me and my job?

The bad news is that you won’t escape.

The good news, however, is that there are tons of opportunities if you develop an understanding of the ongoing change and if you build a strategy on that understanding.

To help you on the way, I will share some insights, built on my experience and focus on future, technology and digital change for individuals, organizations, and society.

  1. Start discussing digitalization and automation in your organization, in particular with the top management, which has to understand its importance and be ready to take action. If the top management doesn’t show this understanding, consider looking for a new job. Your company or organization will be in trouble.
  2. Get rid of boring work. Ask yourself which of your daily tasks you wouldn’t mind at all having a machine do for you. Most repetitive work is boring, and the good thing is that machines are particularly skilled at repetitive tasks. Let them do it and free your time for work that you find more interesting.
  3. Be curious! Start investigating digital and AI-based tools in your field, and learn about them. Remember that such tools will help people to work faster and more efficiently through automation—be it lawyers, researchers, doctors, or analysts—and that those who are learning ahead of others will be winners.
  4. Collaborate with machines. No, machines won’t steal your job, at least not for a good while. In contrast, collaboration with machines is a winning formula. The best chess teams are humans together with computers, and that will be true for most other areas. So investigate how you can collaborate and build a team with AI-based systems, and even learn from them.
  5. Become better than machines. Four main areas will be the most difficult for machines to master: Creativity, ability to convince and motivate other people, empathy, and fine dexterity. Find out which of your work tasks are related to any of those areas and try to develop them further.
  6. Get more social. Remember that the Internet helps people connect, peer-to-peer, across the world, in contrast to all hierarchies we have built through thousands of years. This goes also for machines, but people are more social (so far). Use and develop this opportunity in your work, and reinforce your informal networks, not only in your own industry. Also—always try to share your insights and your journey with others. Together you are stronger!
  7. Pay attention to digital strategies. In the long run, digitalization will require a transformation of everything at work—from organization, development, and sales to the core business model. A digital strategy that is a separate part of your organization’s operations might be a good and careful start, but over time it’s not enough. Contribute to digitalization becoming a part of everything your organization does in a longer perspective—this will increase the chances that your own job can evolve.

In other words—your job will disappear, but digitalization, automation, and AI are bringing great opportunities to create a new one. And the earlier you start investigating these opportunities, the bigger the chances that you will be successful.

Note: I also do seminars and workshops on digital transformation, and if you want a deeper look at digitalization, please don’t hesitate to contact me.

Nine Things You Should Know to Be Smart About Driverless Cars

Just a few years ago, not many people realized what the digitalization of transportation would mean. Now, autonomous cars are the talk of the town, and carmakers, tech giants, and start-ups are racing to stay ahead in the mercilessly competitive transformation of the mobility industry.

It could turn out to be the most profound of all digitally driven transformations. Some call it a perfect storm, because of the fast convergence between autonomous driving, electric vehicles, and connected vehicles. The convergence is explosive and will result in so many secondary effects that it’s hard to even imagine them.

Such effects will hit you too, which might get you worried. The good news, however, is that you’ve got great business opportunities ahead if you develop an understanding of these effects and build a strategy on that understanding.

To help you on the way, I will share some insights, built on my experience and focus on future, technology and digital change for individuals, organizations, and society.

Here we go:

  1. Driverless cars will save lives. Over 3,000 people get killed every day on public roads in the world—more than two people every minute (not to mention those injured). Few diseases kill more people. Autonomous cars will be safer. They never get distracted, irritated, emotional, tired or drunk. The funny thing, however, is that even if all cars were autonomous, and maybe 1,000 people were killed every day, we would not be happy. Somehow we excuse people but not machines. Be ready for this debate. Eventually, though, humans will probably not be allowed to drive cars, except at remote locations and, in an emergency, at low speed.
  2. Jobs will be displaced big time. Autonomous cars will lead to a substantial loss of jobs. Primarily drivers of cars, buses, and trucks. Goldman Sachs expects the decline to be 25,000 jobs a month in the US alone when vehicle saturation peaks some years ahead. Since most vehicles will be electric (see below), a secondary effect will be the loss of jobs at gas stations, garages (electric vehicles require significantly less maintenance), spare parts providers, the oil industry and more. If you have such a job, let machines do the boring tasks and focus instead on things that machines are not good at. Essentially this concerns four areas—creativity, the ability to motivate and convince other people, empathy, and fine dexterity. Find out which of your daily tasks are related to one of those areas and develop them further. Bus drivers could, for example, shift towards onboard host roles.
  3. Fossil fuel cars will be displaced big time. According to Stanford University economist Tony Seba, no more petrol or diesel cars, buses, or trucks will be sold anywhere in the world within eight years. A twin ‘death spiral’ will hit those vehicles—since electric vehicles are ten times cheaper to maintain than cars that run on fossil fuels and have a near-zero marginal cost of fuel, people will switch, making it harder to find a petrol station, spares, or anybody to fix an internal combustion engine. Sell your petrol or diesel car before it’s too late. One fundamental issue, however, will be how to charge massive numbers of electric vehicles. Another issue is the environmental impact of battery production. Therefore expect the need for new energy sources to provide electricity on board—consider for example LENR.
  4. Car-owning will change. As with most digital transformations, autonomous cars will push a shift from owning to accessing. Most people will access mobility as a service, potentially at a fixed monthly cost. As for owners, there are four good owning cases: 1. People who want to own a self-driving car, for convenience and status. 2. Taxi or transportation firms such as Uber, Lyft, Otto and others. 3. Car-sharing firms such as Zipcar. 4. Cars that own themselves, doing business and occasionally driving themselves to vehicle inspection, the repair garage, etc. And if they get rich they will buy another car and become two. Or more. Get ready to choose your owning or accessing strategy.
  5. Policymakers will have to regulate. Driverless cars will require fewer parking spaces in cities, but if unregulated, they will lead to more traffic since it will be easier and cheaper to use transportation. People could, for example, choose to let the car circulate while they shop in crowded cities, and soon streets would be congested. The risk of terrorists using autonomous vehicles for attacks is obvious too. This, and more, requires wise regulation.
  6. Real estate values will be affected. Locations that are a little too distant for people to commute by car today will be more attractive since you will be able to work when commuting, instead of having to drive the car. Check for such real estate opportunities, before the value increases.
  7. Car ethics will be hot. Autonomous cars will have to make decisions, and such decisions need to be certified. However, don’t expect the decisions to be clearly programmed. Autonomous cars are self-learning and will take decisions in a way similar to humans, although many decisions will be taken together with other nearby cars. The upside is that cars can learn, become better, and immediately share their knowledge with millions of other vehicles. The challenge will be how to certify such self-learning vehicles—maybe with a driving license test? Be sure to understand this as a user.
  8. Privacy to the next level. If privacy on the Internet is already a complex matter, privacy for autonomous car users will be the advanced level. An autonomous car will have AI inside to provide help, service, answers, and entertainment, while also communicating with other vehicles. This will expose your activities more than normal Internet usage does. On top of that, autonomous vehicles will know where you need or like to go, with whom and when. Think well about who can access this data.
  9. Car security will be fundamental. With so much responsibility in the hands of the car itself, cyber security for cars will be of fundamental importance. No details need to be explained. Make sure the car provider is top notch on this point.

These nine insights might help you to prepare for choices you will need to make with regard to driverless vehicles.

But also remember that huge business opportunities will emerge for digital products or services—from finance, law, entertainment or any other field—that can be mixed into the autonomous and electric mobility industry. Start investigating today!


Note: I also do seminars and workshops on digital transformation, and if you want a deeper look at digitalization, please don’t hesitate to contact me.

Eight Things Retailers Should Know to Be Digital Winners

The digital transformation of retail has been going on for some years now, and as a retailer, it’s easy to become worried about how to remain competitive in such a fast-changing environment.

The bad news is that this change still has a long way to go. The good news, however, is that you shouldn’t worry if you develop an understanding of the change and if you build a strategy on that understanding.

To help you on the way, I will share some advice, built on my experience and my focus on digital change for individuals, organizations, and society.

Here we go:

  1. Investigate your business model. Digital goods can be copied infinitely, and non-digital goods can be shared efficiently on digital platforms. And you reach the world with one click. Generally, this combination is what breaks traditional business models and also creates a push from owning to accessing, as for example in the music industry. Investigate to what extent your offerings are purely digital, or shareable on digital platforms, and play with ideas on business models taking advantage of the digital economy. Maybe you could design new digital services to go with your existing products or mix your digital offerings with digital content from other fields. Don’t stop. Ever.
  2. Be local (or niche). E-commerce is continuously growing, but essentially there are four ways to stay ahead online: Lowest price (hard to compete with Amazon and Aliexpress), Well-known Brand (hard to compete with Adidas and Nike etc), Niche Products (yes, you might sell sheep shears from Sardinia), Local Presence (yes, you might have a local store, which is more common than having niche products).
  3. Become Phygital. Local is good, but in a digital world, you also need online and mobile presence. And once you have both it’s important to integrate your Physical store with your Digital store, seamlessly—into Phygital: One single cash system, one single stock system, one single sales system, one single marketing system etc. Many providers offer these solutions (some call this Omnichannel, but that doesn’t underline integration, which is essential).
  4. Be social locally. The advantage of local presence is that it’s easier to meet people—managing returns and being able to offer additional items at the same time, building a brand experience in your store where people want to hang out, offering customers a pick-up location for goods ordered online, and more. This is why even Amazon is experimenting with local stores.
  5. Be social online. Naturally, you need to take advantage of all the opportunities offered on the Internet to be digitally social. Listen to advice from experts on social networks and always aim at creating an online presence that people might want to share with each other. Here’s where you can build digital momentum.
  6. Be super transparent. Customers visiting your physical or digital store should always know exactly what they can find in each of them, at any moment. Your physical store might be small, and could yet offer digital ways of discovering a large stock of products. Customers should be able to place orders in both stores and get deliveries at home or in the physical store, independently of where the order is placed.
  7. Develop logistics. Logistics is key for retail, and logistics services for phygital commerce are evolving. Investigate what new opportunities are offered, for example integrating delivery with nearby stores or having stock delivered to you in real time.
  8. Be predictive. With big data analysis, it’s possible for large online businesses to know what customers will order before they order it. Today, Predictive Analytics is also available for small businesses. The two top benefits are:
  • Customer Retention—for example predicting when existing customers will come back and buy more of something. Reminding them before they think of it will make them happy.
  • Demand Forecasting—tracking inventory, discovering trends and forecasting demand at different times can significantly help improve the top line.
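As a rough illustration of the two benefits above, here is a minimal sketch. The data and the rules (a moving average for demand, an average purchase gap for retention reminders) are invented for illustration; real predictive-analytics tools use far richer models, but the underlying idea of projecting behavior from history is the same:

```python
def forecast_next(sales, window=3):
    """Naive demand forecast: the average of the last `window` periods."""
    recent = sales[-window:]
    return sum(recent) / len(recent)

def days_until_reminder(purchase_days):
    """Customer-retention heuristic: remind a customer when the average
    gap between past purchases has elapsed since the last one."""
    gaps = [b - a for a, b in zip(purchase_days, purchase_days[1:])]
    return sum(gaps) / len(gaps)

# Hypothetical weekly unit sales for one product
weekly_sales = [120, 135, 128, 150, 142, 160]
print(round(forecast_next(weekly_sales), 1))  # 150.7

# Hypothetical purchase dates (days since first purchase) for one customer
print(days_until_reminder([0, 30, 62, 90]))  # 30.0
```

Even heuristics this simple can drive a reorder reminder or a stock decision; the commercial tools differ mainly in how much data and how sophisticated a model sits behind the same two questions.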

In other words—digital is still a great opportunity in retail, and the earlier you investigate its opportunities further and keep track of new trends, the bigger the chance that you will be a digital winner.

Note: I also do seminars and workshops on digital transformation, and if you want a deeper look at digitalization, please don’t hesitate to contact me.

Seven Things Lawyers Should Know to Be Digital Winners

Many lawyers and law experts worry about how their professional opportunities are changing with digitalization, observing how some daily tasks are already being taken over by digital automation and artificial intelligence, AI.

And yes, the ongoing change in the legal sector is significant, and moreover, it only just started. The good news, however, is that you shouldn’t worry if you develop an understanding of this change and if you build a personal strategy based on that understanding.

To give you some help on that journey, I will share some important advice, grown from my daily focus on technology and digitalization and its impact on businesses and on society through many years.

Before getting to the advice (jump down if you’re in a hurry), let me just highlight three core pieces of the digitalization puzzle that many people struggle to put together.

First of all, remember that digitalization has become a hyped term, meaning anything and everything. Instead of scratching the surface, looking at apps and social network strategies, try to focus on what really makes digitalization a powerful driving force for change.

These three aspects of digitalization are fundamental:

  1. The cost of a digital copy is basically zero, and you reach the whole world with one click. This means that once you have digitalized content, tools, processes, products, services or methods, you can spread them over the world at a very low cost. This is what makes traditional business models break, pushing a shift from owning to accessing, among other things. Think of how the music industry went from owning discs to accessing music, but also of any kind of digital tool in the legal sector.
  2. People—it is people using, investigating, and taking advantage of the low cost of digital material and the possibility of reaching the world that really makes digitalization explode and disrupt many industries. And since the Internet is not hierarchical, people are networking, connecting with individuals all across the world, changing the way we organize, collaborate, build things, provide funding, distribute news, recruit collaborators and much more. And people are already building network-inspired legal services, such as the Swedish Lawline.
  3. Algorithms and AI. Algorithms are what let Amazon effectively tip you off with ‘Customers who bought this item also bought…’ while AI, which is now evolving rapidly, adds a why-dimension to that kind of advice, making it possible to answer more complex questions. Both can learn over time, becoming richer, more accurate, or individually adapted.
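The ‘customers who bought this item also bought…’ mechanism in point 3 can be sketched as simple co-occurrence counting. This is not Amazon’s actual algorithm, and the basket data and item names are hypothetical:

```python
from collections import Counter

def also_bought(baskets, item, top_n=2):
    """Count how often other items co-occur with `item` in the same
    basket and return the most frequent ones. Pure correlation —
    no 'why-dimension', which is what AI adds on top."""
    co_counts = Counter()
    for basket in baskets:
        if item in basket:
            co_counts.update(i for i in basket if i != item)
    return [i for i, _ in co_counts.most_common(top_n)]

# Hypothetical purchase baskets
baskets = [
    {"contract-template", "e-signature", "case-tracker"},
    {"contract-template", "e-signature"},
    {"case-tracker", "research-tool"},
    {"contract-template", "research-tool"},
]

print(also_bought(baskets, "contract-template", top_n=1))  # ['e-signature']
```

The strength of the approach is that it needs no understanding of the items at all, only purchase data, which is also its limit: it can recommend, but it cannot explain or answer a more complex question.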

In the legal space, the use of algorithms and AI is spreading quickly (remember that they are spreading and evolving because of points 1 and 2 above).

Key areas are:

  • Research tools—such as ROSS, built on IBM’s system Watson that won over humans in Jeopardy in 2011. ROSS answers advanced legal questions in plain language.
  • Contract Review—AI-based tools that assist attorneys in analyzing contracts and other legal documents, pulling out key points of interest. One example is Extract by the UK based company RAVN.
  • Electronic Discovery—tools for technology-assisted review, aka TAR, helping attorneys with the hugely time-consuming work of going through documents, searching for relevant facts, cases, relationships and more.
  • Prediction—AI-based tools that effectively predict the outcome of a case, court decisions and more, based on the currently available information. These tools help to assess whether it’s worth pursuing a case or not.

On top of these, there are other areas, such as tools for assisting courts with certain tasks. Meanwhile, important research is being done to verify that bias is not integrated into such tools—the systems are prone to bias since they learn from data that might be biased.

As you can see, digital technology is starting to transform the legal sector, and yet this is only the beginning. So, as a legal professional, how can you prepare for, and take advantage of this change? Here we go.

Seven digital tips to lawyers:

  1. Start discussing digitalization in your organization, in particular with the top management, which has to understand its importance and be ready to take action. If the top management doesn’t show this understanding, consider searching for a new job. Your company or organization will be in trouble.
  2. Start investigating digital and AI based legal tools on the market, and learn about them. Remember that such tools will help legal professionals to work faster and more efficiently through automation, and those who are learning ahead of others will be winners.
  3. Collaborate with machines. No, machines won’t steal your job, at least not for a good while. In contrast, collaboration with machines is a winning formula. The best chess teams are humans together with computers, and that will be true in most other areas. So investigate how you can collaborate and build a team with legal AI-based systems.
  4. Get rid of boring work. Ask yourself which of your daily tasks you wouldn’t mind at all having a machine do for you. Most repetitive work is boring, and the good thing is that machines are particularly good at repetitive tasks. Let them do it and free your time for work you find more interesting.
  5. Remain better than machines. Four main areas will be the most difficult for machines to master: Creativity, ability to convince and motivate other people, empathy, and fine dexterity. Find out which of your work tasks are related to any of those areas and try to develop them further.
  6. Get more social. Remember that the Internet helps people connect, peer-to-peer, across the world. Use and develop this opportunity in your work, and reinforce your informal networks, not only in the legal sector.
  7. Beware of digital strategies. Don’t contribute to building digital strategies that risk remaining a separate component of an organization’s operations. Essentially, digital technology is just a new tool, and like any other technology it changes the conditions for what you’re doing. What’s particular about digital technology is that it’s a hugely powerful driver of change, and it requires an adaptation to the new conditions—of everything from organization and sales to the main business model. Any digital strategy that includes less is not enough.

In other words—digital is an opportunity in the legal space, and the earlier you investigate its opportunities, the bigger the chance that you will be a digital winner.


Note: I also do seminars and workshops on digital transformation, and if you want a deeper look at digitalization, please don’t hesitate to contact me.

Also, feel free to register for the legal innovation event VQ Forum, on October 19, 2017, where I will give a keynote on ‘digitalization—a threat or an opportunity.’

Been a bit busy—will soon be back

For various reasons—mostly positive—I have been a bit busy the last year and not so active on this blog.

In particular, I have dedicated a lot of my time to following the ever increasing flow of interesting news on technology and its implications for society, organisations and individuals, continuously sharing much of this on my Twitter feed, on my Facebook page and on LinkedIn.

However, I plan to be back soon with some fresh posts here on the blog, so please stay tuned.

Thanks for visiting.

Announcing the New Energy World Symposium

Note: The New Energy World Symposium has been re-scheduled to June 18-19, 2018. Read more here.


Today I’m announcing the New Energy World Symposium that will hold its first session on June 21, 2016, in Stockholm, Sweden.

The conference will focus on the disruptive consequences of a new cheap, clean, carbon-free and abundant energy source—LENR or Cold Fusion—that may literally change the world, promising Planet Earth clean water, zero-emission vehicles with unlimited mileage, a solution to the climate crisis and much more.

I’m particularly proud to announce a few of the renowned speakers who together with me believe that it’s high time to draw global attention to this subject.

Read more in this blog post at the symposium’s main website, where you will also find further information.

Give your organisation an injection of inspiration on digitisation

The AI based cognitive system Amelia by IP Soft is a digital assistant that communicates and learns through natural language—an example of technology that already is far more than just a tool, pushing the change that technology brings to our world.


As I give talks on future and technology—or rather before I give my talks—I often meet people saying they feel that we’re entering a time of big changes, and yet they seem to have difficulties in defining exactly what this change is. They might refer to the fast uptake of smartphones, tablets and social media, but they probably realise that this is not the most important aspect.

I believe that there are two reasons for the difficulty of seeing the real power in the change technology brings—a change that will eventually affect every industry and every business, and every individual too.

One reason is that most people still consider technology as a tool that does things we want it to do, be it a wrench or a computer. They don’t realise that technology today is so penetrating and fast developing that it has become a driving force for change in itself, more than ever before—and also more irresistible than other technologies that brought earlier fundamental changes to society, such as industrialisation, since today’s technology is so ubiquitous and easily available.

The other reason is that people tend to be amazed at what technology can do today, without taking into account further development, which is also accelerating. This brings an illusion that things will be as they are for some time. But change will never be as slow again. 

This is also why it’s so challenging for most companies and organisations to start adapting to digitisation. Often such projects become action plans involving new strategies for some apps or for social media, but in the long run this is just scratching the surface of the digital revolution.

Until everyone in the organisation, employees and top management, has understood the explosive and unstoppable power of digital technology, it’s unlikely that strategies and action plans will be profound enough to keep up with the accelerating pace of change.

That’s why it’s so satisfying to bring up the fundamental aspects of digitisation in seminars, describing not just what technology is, but what it does and what it means. It gives everyone an injection of inspiration on digitisation—how it works, what it means—and on all the revolutionary new opportunities that will affect—and benefit—them. That is the real starting point for a valid action plan, the first steps towards new ideas and new business opportunities—faster, more efficient and more profitable.

I note that people are amazed when I describe the power of the accelerating pace of technology development, with Moore’s law—describing a doubling of the power of information technology every second year—still alive and kicking after 50 years, not to mention that a similar ‘law’ has been valid for billions of years. And the real explosion is starting right now, with strong artificial intelligence—machine intelligence at the level of humans—being realised within only 15 or 20 years.
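To get a feel for what 50 years of Moore’s law implies, here is a quick back-of-the-envelope calculation. It is my own illustration, not from the talks, and it assumes an idealised clean doubling every two years:

```python
# Illustration of Moore's law as a doubling every two years.
# (My own back-of-the-envelope sketch; the real curve is of course messier.)

def moores_law_factor(years, doubling_period=2):
    """Growth factor after `years`, doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# 50 years of doublings is 2^25, roughly a 33-million-fold increase.
print(f"Growth after 50 years: {moores_law_factor(50):,.0f}x")
```

Twenty-five doublings turn one unit of computing power into about 33.5 million units. It is that compounding, rather than any single generation of chips, that makes the pace of change so hard to intuit.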

I see that it’s useful for people to understand the seven mechanisms that make digitisation so disruptive—from zero cost of a copy changing fundamental economic models, to the little explored potential in mixing digital content between industries and businesses to create completely new products and services.

And it’s effective to get an insight in the different ways that digitisation is about to transform a series of industries, just like the music industry was disrupted—from education and healthcare to finance, law and transportation. Not to mention Industry 4.0—the new data based industry phase, following disruption from steam, electricity and electronics.

In the end however, it’s easy to get trapped and almost spellbound by all the new opportunities technology offers, and therefore also easy to forget who we are—humans, with dreams, visions, passions and values.

This might be the most important part to realise when starting to adapt to digitisation and accelerating technologies. Because these are aspects of us that we have developed over millions of years, aspects that can help us make the best out of technology driven change—something that moves in harmony and not in conflict with us, both as individuals and as a society.

In the end, this will also be hugely important when we in the coming two decades will have to adapt to a world where jobs are disappearing, being automated and performed by intelligent systems and robots. Our ideas about work and about what to do with our lives will then have to change fundamentally—a change that might bring amazing opportunities if society adapts in time, and if we as individuals are ready to grasp and understand these opportunities, remembering who we are as human beings. •••

– – – –

This post was originally posted on Linkedin.

One step closer to long distance private drones


PhD student Andrew Barry from MIT’s Computer Science and Artificial Intelligence Lab has developed a detect-and-avoid system that lets drones fly autonomously through a tree-filled field at a speed close to 30 mph.

The system operates at 120 frames per second and is running 20 times faster than existing software.

In Sweden, flying out of sight with ordinary drones is not permitted, even though many people do it using what is called First Person View, FPV, meaning that the pilot looks through video glasses showing the image from a camera on the drone. With a good radio link it’s possible to fly tens of miles away.

What’s required for out-of-sight flight, according to Swedish rules, is an on-board detect-and-avoid system, and the system developed by Barry could be one step towards cheap and commercially available such systems.

Flying much farther away would then also be possible. It has been shown that drones can be controlled using signals through commercial cell phone networks, which means that you could basically fly a private drone on the other side of the world.

Former Skype founders rethink local delivery


Former Skype founders Ahti Heinla and Janus Friis aim to reshape local small-scale delivery with the company Starship Technologies and this small autonomous device, moving at four miles per hour. The plan is to offer local delivery from the grocery store or retailers, or last-mile delivery of goods arriving at local hubs with ordinary carriers, at a tenth to a fifteenth of the cost of current last-mile delivery alternatives. Local point-to-point delivery is also an option.

Shoppers can follow the robot’s position through an app which is also used to unlock the lid of the robot.

The first pilot services are planned for the UK, the US and some other countries in 2016.

Digitisation of transportation is already coming through driverless cars, bringing big changes in car ownership, parking, traffic optimisation, city planning, logistics, taxi services and much more. Starship Technologies’ robot could add to that process, being much cheaper than a larger vehicle, and yet much easier to handle, more energy efficient and with bigger load capacity than drones.

Swedish scientists claim LENR explanation break-through


Rickard Lundin, credit: Torbjörn Lövgren, IRF.

Essentially no new physics but a little-known physical effect describing matter’s interaction with electromagnetic fields — ponderomotive Miller forces — would explain energy release and isotopic changes in LENR. This is what Rickard Lundin and Hans Lidgren, two top level Swedish scientists, claim, describing their theory in a paper called Nuclear Spallation and Neutron Capture Induced by Ponderomotive Wave Forcing (full length paper here) that will be presented on Friday, October 16, at the 11th International Workshop on Anomalies in Hydrogen Loaded Metals, hosted by Airbus in Toulouse, France.

From the conclusions of the paper:

“This report demonstrates, theoretically and experimentally, that nuclear energy production may be accommodated in rather small units, operating at modest temperatures (≈900-2000°C), and produce sustainable power output in the range 1 – 10 kW – at minute fuel consumption (few grams per year). (…) The magnitude of the power output, delivered from a miniscule amount of fuel, demonstrates that it is a nuclear process with great potentials. Properly utilized the process has potentials of becoming an unlimited and sustainable energy source, producing essentially no long-lived radioactive waste.”
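The magnitude of those figures can be sanity-checked with simple arithmetic. The sketch below is my own back-of-the-envelope calculation, not from the paper: it takes the low end of the quoted range (1 kW sustained from a few grams of fuel per year, here assumed to be 3 grams) and compares the implied energy density with a common reference value for gasoline.

```python
# Back-of-the-envelope check of the quoted figures (my own arithmetic).
# 1 kW sustained for a year from ~3 grams of fuel implies an energy density
# hundreds of thousands of times that of a chemical fuel, i.e. nuclear-scale.

SECONDS_PER_YEAR = 365 * 24 * 3600       # about 3.15e7 seconds
power_w = 1_000                          # low end of the claimed 1-10 kW range
fuel_grams = 3                           # "a few grams per year" (assumption)

energy_joules = power_w * SECONDS_PER_YEAR      # total energy over one year
joules_per_gram = energy_joules / fuel_grams    # implied energy density

GASOLINE_J_PER_G = 46_000                # roughly 46 kJ/g for gasoline

ratio = joules_per_gram / GASOLINE_J_PER_G
print(f"{joules_per_gram:.1e} J/g, about {ratio:,.0f} times gasoline")
```

At roughly 1e10 joules per gram, the implied density is about five orders of magnitude beyond chemical combustion, which is consistent with the authors’ conclusion that only a nuclear process could account for it.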


Jobs will go away… but not work!

With digitization and automation, jobs will disappear, but we’ll continue to work anyway. That is what several experts I have been talking to believe. They also think that the distribution of wealth in society could become the biggest challenge ahead.


Anna Felländer, photo: Swedbank.

None of the experts I have been talking to about jobs and the future disputes the picture that a large part of today’s jobs will be automated.

“Everyone understands that we are replacing people with robots in factories that are being automated. What is still not widely recognized is that this also applies to white collar jobs now — those jobs that absorbed much labor when we started the transition to manufacturing in factories, and also what made this transition relatively painless. The question is whether there is a third sector that can absorb those who are losing white collar jobs,” says Gunnar Karlsson, professor of telecom traffic systems at the Royal Institute of Technology in Stockholm, Sweden.

So the crucial question is whether new jobs will be created when old ones disappear. Many hope they will be created in innovative start-ups, but Swedish technology weekly Ny Teknik concluded last fall that 25 of Sweden’s most acclaimed start-up companies together had created 3,700 jobs, while 5,000 jobs had disappeared in five years in the Volvo Group alone.

Another hope, however, lies in increased demand for local services as a result of higher income for those who still have jobs, and according to a report by the Foundation for Strategic Research (Stiftelsen för strategisk forskning) this seems partly to be true (see below).

Robin Teigland, photo: Jörgen Appelgren.


Anna Felländer, chief economist at Swedbank, also sees a trend moving in that direction, where the most important thing, according to her, will be to make it easier for innovative companies and for the self-employed.

“40 percent of the US workforce is self-employed and in five years it will be 50 percent. There is an individualization of the economy going on, and it happens quickly in Sweden,” she says.

Robin Teigland, a researcher at the Stockholm School of Economics, who has written a report on the new ‘sharing economy’ with Anna Felländer, believes that our revenues will come from many sources, and even from machines working for us.

“They can make money for you without you having to do anything except maintaining the machine or the resource,” she says.

Per Johansson, photo: Agnes Wallner.


Per Johansson, a former researcher in human ecology and now part of the think tank Infontology, believes in a similar trend, but he wants to make a distinction between jobs and work. He says that jobs are functions in systems with roots in bureaucracies and military chains of command — tasks with some kind of mechanical character. Work is instead what we do in life — anything that requires effort.

“We need to shift focus towards other human urgencies and tasks — areas in which I think there will never be any shortage of work. It’s a kind of change of imagination we have to undergo,” he says.

How difficult it will be depends on how indoctrinated we are in the old beliefs, according to Johansson.

Roland Paulsen, photo: Jessica Segerberg.


Roland Paulsen, PhD in sociology at Lund University, goes further.

“I see work as the biggest environmental problem we have. Nothing consumes so much of the Earth’s resources,” he says, referring to the objective of high employment rates, despite automation effects, which leads to unnecessary production and consumption.

No matter which picture of the future of jobs you choose, most believe that the distribution of wealth will be a major challenge. One reason is the risk of accumulation of profits to giant corporations with high automation rate. Another is the risk of very large income gaps. A third is that the self-employed may be exposed to fierce price pressure through global digitization.

“We will need to improve individuals’ negotiating position through regulation,” says Robin Teigland.

One solution being discussed is unconditional basic income — a kind of base salary to everyone, sufficient for covering basic costs in life, and replacing all other social programs in society. Many are skeptical, including former Swedish Prime Minister Fredrik Reinfeldt, even though he early on recognized the risk that jobs can be automated.

“I see it as a left wing alternative that has occurred in the debate (…) where we in fact establish a society where a large percentage of the population will never get any job because they will live on the citizen salary,” he says.

But proponents see great opportunities.

“It is important to focus on what a citizen salary would make possible if people did not have to worry about providing daily food and housing — a freedom from adapting to other people’s agendas, which the old way of governance and managing means, and then new businesses can emerge. In a best case scenario this leads to an adaptability which I think is key to solve these problems,” says Per Johansson.

Roland Paulsen notes that a citizen salary may have surprising effects.

“You would have to raise the salary of crappy jobs that no one wants to do if they don’t have to. An upside-down world where the least attractive jobs would become the best paid. But I still think that a basic income can lead to the development of new ethics and a new way of living together,” says Roland Paulsen.

– – – –

Machines are taking over the jobs

  1. The report “The Future of Employment” from Oxford University (2013), by Carl Benedikt Frey et al, states that 47 percent of the jobs in the United States are at high risk of being automated within a few decades. The report also claims that the entry of machines will occur in two waves. First, low-wage jobs with low educational requirements will be hit, for example in transportation, logistics, production and administration. Then there will be a pause while technology evolves to embrace creative and social abilities as well, and at that point well-paid jobs with high educational requirements will also be affected.
  2. In the report “Every second job will be automated within 20 years” from the Foundation for Strategic Research (Stiftelsen för strategisk forskning) — a Swedish adaptation of the Oxford report — the author argues that 53 percent of Swedish jobs run the same risk.
  3. Gartner expects that one in three jobs will be done by software, robots or smart machines by 2025.
  4. Pew Research Center’s report “AI, Robotics, and the Future of Jobs” is based on answers from 1,896 technology experts and analysts. 48 percent of them thought that technology will displace more jobs than it creates, while 52 percent believed the opposite.
  5. The report “New jobs in the automation era” from the Foundation for Strategic Research shows that 450,000 jobs were lost in Sweden through automation between 2006 and 2011. New jobs and increased demand for local services, due to higher wages for those who still had their jobs, replaced three quarters of the lost jobs. Reforms in the labor market contributed to a total rise in employment.

– – – –

This can become Sweden’s role

What can be the role of Sweden for handling the transition when jobs are being automated?

Per Johansson: “We may be good at finding a new imaginative and realistic optimism. You have to observe all sorts of negative things and limitations clearly, but if you create a confidence in the future that is effective enough, then you have the antidote you need against destructive tendencies that strive back to the Middle Ages.”

Roland Paulsen: “To be a leading country. Economic globalization is not the only one, there is a political / ideological globalization too. That the Washington Post reported on the experiment in Gothenburg (six-hour work day), says something about when a country is at the forefront, it puts pressure on other countries. And Sweden has a long tradition of being a political frontrunner.”

Robin Teigland: “Sweden is at the forefront in sharing economy, in robotising of jobs, and in IT and Internet penetration. Many people look at us and see how we do. We can sell our services and our knowledge. All countries are facing this and people come to us. Sweden can also be a test market, where it is easier to push change than in the US for example.”

– – – –

A part of this report was first published in the Swedish magazine Digital Teknik.

The Italian edition — Un’invenzione impossibile — is finally out!

(This post was originally published on

I’m happy to announce that the Italian edition of my book An Impossible Invention — Un’invenzione impossibile — is finally out. I’m particularly satisfied since the story is closely related to Italy, to which I have personal connections, my wife being Italian.

Many thanks to Alex Passi, who made the translation and who preferred not to be compensated, instead asking me to donate part of the sales revenue to scientific research, which I will do. Which research will receive the donation is still to be decided.

Read more here.

Rossi has been granted US patent on the E-Cat — fuel mix specified

(This post was originally published on

On August 25, Andrea Rossi was granted a patent on his LENR based heating device the E-Cat. The patent, which has the filing date March 14, 2012, can be downloaded here: US9115913B1

As far as I understand, the patent describes the so-called low temperature E-Cat that Rossi showed in semi-public demonstrations on several occasions in 2011, and which is also used in an ongoing 350-day trial of a 1 MW plant, but since it describes core parts of the technology it is probably also valid to a certain extent for more recent E-Cat models with higher operating temperature.

Read more here.

Watch out for fintech – banks need to wake up!

Stockholm is singled out as the world’s hottest spot for startup companies in the financial and banking sector–fintech. Now experts are warning that big banks are not sufficiently innovative.

Katarina Segerståhl. Photo: Tomi Parkkonen


“Sure, certain banks are technology mature, but I do not know if that should be called innovation. What has your bank done for you in recent years that is innovative?”

Erik Wetter, director of the incubator Business Lab at Stockholm School of Economics, where the successful Swedish payment startup Klarna started, is asking the rhetorical question.

He states that Stockholm has become perhaps the world’s hottest spot for fintech–young tech companies in the area of banking and finance. With innovative services and smart technology, they see opportunities to challenge traditional banks.

“One of the largest US banks can make four releases a year in their system that handles a trillion dollars per year. We make four releases per day. It says something about the speed with which we are testing products and how users feel, think and behave–in a completely different manner than the banks can do,” says Erik Engellau-Nilsson, marketing manager for Klarna.

Fintech startups’ motivation is likely boosted by the fact that banks belong to an unusually profitable industry. Or, as former Swedish Prime Minister Fredrik Reinfeldt put it at the seminar Bankdagen 2015 in Stockholm recently:

”The banks make fantastic profits and it is only to be congratulated. But if you’re making big profits, there’s a risk of being challenged. Such large profit margins are difficult to defend over time.”

Erik Wetter thinks that, as in other oligopolistic industries such as aviation, it will be difficult for incumbent banks to meet the challenge from startups. He is backed by Robin Teigland, a researcher at the Stockholm School of Economics, who believes that banks will be consolidated or may well fail in the long term.

The question is what banks are doing, and what they should be doing.

Erik Wetter notes that IT giants such as Google and Apple are basing their innovation on acquisition of innovative companies, while banks do not seem to take that opportunity. As an example, the new Swedish venture capital company NFT Ventures, focusing on Fintech, does not have a bank as lead investor, but the Swedish media company Bonnier. And the major Swedish bank SEB’s most successful venture capital investment is said to be the musical Mamma Mia, but they did not invest in the successful fintech startups Klarna or Izettle.

”That is symptomatic,” says Erik Wetter.

Robin Teigland. Photo: Jörgen Appelgren.


Robin Teigland, however, does not think that banks should buy innovation.

”It’s an old mindset that you have to own. It is better to collaborate,” she says.

She thinks that banks should experiment more, internally and externally. For example she suggests internal projects with crowdfunding, where employees get a certain amount of money to invest in various projects.

”How many bank employees have tried crowdfunding, or peer-to-peer lending? And how many have tried Bitcoin? If you do not understand how things work, how will you be able to decide?” she asks.

Both the banks themselves and startup companies believe that user experience has become particularly important. The Finnish consulting firm Tieto, which organized Bankdagen 2015, notes in a report that seven out of ten banks see digital customer experience as the key to efficient operations. Yet only twelve percent of the companies use data from different sources to individualize services.

”Consumers are often one step ahead of the banks on this. If you need a loan, it may be perceived as less risky and easier to go to a peer-to-peer service, and you do not have to worry about credit approval. And those who need funding for an idea can go to Kickstarter instead of a bank branch,” Katarina Segerståhl, director of strategic design at Tieto, notes.

“This means that banks need to approach consumers even more and create new services along with them,” she adds.

Elísabet Grétarsdóttir. Photo: Christian Rhen.


In the end, the question of understanding the customers might be about culture. That is what Elísabet Grétarsdóttir believes; a few years ago she was recruited from the gaming industry to become marketing manager for the Icelandic Arion Bank, which was formed from the collapse of Kaupthing Bank.

She points out that banking culture has been shaped over hundreds of years, and that it is hierarchical and risk-averse–and rightly so since financial systems are an important backbone of society.

But today this culture is blocking innovation, Elísabet Grétarsdóttir says, and she believes that many in the banking industry do not see this.

“They live and dream this culture, and when something in your environment is a part of you, you stop seeing it,” she says.

As an example, she takes the dress code in the banking world.

“Dress codes were important 50 or 60 years ago, when people thought that suits inspired confidence and trust. Today, consumers’ values have changed and they don’t see trust in an outfit. Instead, they build trust when they feel that someone really cares about their well-being. That’s how brands need to operate, and this is where financial institutions have a hard time connecting with consumers. They have different value systems,” says Elísabet Grétarsdóttir, who just left Arion Bank for a job at the EA-owned Swedish gaming company Dice.

Maybe her new career indicates where the knowledge that the banks need is to be found today.

“We do not know where banking services will be offered ten years from now–if it is from an organization, a local player or perhaps in a global online game,” says Robin Teigland.

– – – –

Here’s what fintech companies focus on

1. Information–better information through efficient apps and data analysis.
Example: Comparisons, overviews, negotiation services and more.
2. Capital Sources–access to new sources of capital.
Examples: Crowdfunding–both as donation and investment for equity. Peer-to-peer lending–services that facilitate loans between people and/or organisations in different target groups.
3. Transactions–services that create new ways to manage transactions.
Example: New payment services, billing services, use of mobile phones as payment terminals.
4. Global Services–services that offer the same simplicity and security internationally as many users expect only in their own country.
Example: Private sales, return of goods, delivery guarantee, credit risk.

– – – –

16 Swedish fintech startups

Klarna – payment services for e-commerce.
Mondido – payment services for e-commerce.
Izettle – turns mobiles and tablets into a payment card terminal and cash register.
Trustly – allows e-stores in several countries to offer direct payment from banks.
Sitoo – common cash register and inventory system in the cloud for e-store and physical store.
Betalo – pay your bills with payment cards.
Toborrow – digital marketplace for business loans (peer-to-peer loans).
Kortio – service for the comparison of payment cards.
Funded By Me – crowdfunding, both as donation and investment.
Lånbyte – negotiating mortgage rate for consumers.
Lendo – find the loan with the lowest interest rate.
Bolånegruppen – negotiates mortgages for groups of people.
Tessin – crowdfunding for properties.
Leasify – optimization of leases and contracts.
Share Travel – allows users to share information on how they invest.
Tink – giving consumers overview of their private economy.

– – – –

This post was originally published in Digital Teknik, in Swedish.

Here’s Swedish LENR company Neofire

This post was originally published on


Peter Björkbom -- photo: Mats Lewan.


Apart from the well-known companies with LENR based technology, such as Andrea Rossi’s partner company Industrial Heat, and Brillouin Energy, founded by Robert Godes, there are a number of small, rather unknown companies that have popped up in the last few years.

One of them is Swedish Neofire, which surfaced in February 2015. It turns out it was founded in 2010 and is run by a single person – Peter Björkbom – whom I got to talk with at ICCF-19 in Padua last week.




What to learn from an historical cold fusion conference — ICCF19

This post was originally published on


Last week, the international conference on cold fusion, ICCF-19, was held, and I would argue it was historic, for several reasons.

The first is the ongoing trial by Rossi and his US partner Industrial Heat of a commercially implemented 1 MW thermal power plant based on the E-Cat. From credible sources I get confirmation of what Rossi states — that the plant is running very well — which means that we should expect important results presented at the end of the 400-day trial, backed up by a customer who certifies the useful power output and the measured electrical input from the grid. Such results will be difficult to challenge.


Will LENR reach mass adoption faster than any other tech?

This post was originally posted on and on E-Cat World.

You often hear that new technologies spread to reach global mass adoption at an ever increasing speed — from electricity, telephones, radio and television to PCs, mobile phones and the web.

The hypothesis seems accurate and also reasonable, given that the world is getting increasingly connected in several ways, in communications, transportation and commerce alike, but it’s actually not correct.

(Read more)


Time to dispel the streetlight paradox of energy

This blog post was originally posted on


The current development in LENR, where things seem to be moving fast towards confirmation of a new energy source, could finally open a way to dispel what I call the streetlight paradox of energy.

It’s about time.

You’ve probably heard the joke about the drunkard who is searching under a streetlight for something he lost…

Read more here.


It seems big banks know about cold fusion

(This blog post was originally posted on


The oil price keeps falling. And most analysts seem convinced that they know the reason — it’s about supply, or demand, or Putin, or Saudi Arabia, or Syria or…

But what if it were something completely different, known only by top people at the world’s biggest banks. And you. That a new, clean and basically infinite energy source might replace oil (and gas, coal and nuclear).

Torkel Nyberg, who runs the blog, has studied this hypothesis for several years. And half a year ago he got what looks like a smoking gun.


Replication attempts are heating up cold fusion

(This blog post was first published on

The reactor used by Alexander Parkhomov.

The reactor used by Alexander Parkhomov.

In just a few weeks, the whole landscape of cold fusion and LENR has changed significantly and, as many have noted, 2015 might bring a breakthrough for LENR in general, with increased public awareness, scientific acceptance and maybe even commercial applications. This is great news.

For those who haven’t followed the latest events, let me summarize.

Most important is the apparent replication of the E-Cat phenomenon by the Russian scientist Alexander Parkhomov. On December 25, 2014, Parkhomov, a respected and experienced physicist, published a short report on an experiment where he had used a reactor similar to the one used by the Swedish-Italian group in the Lugano experiment with Rossi’s E-Cat, and with similar materials in the fuel.

This kind of replication, based on the information in the Lugano report, was what I predicted in the second edition of my book.


Unconditional basic income might be a brilliant idea

Why would anyone suggest a basic income for everyone, and by the way, would it even be a good idea? Well, here you go:

Several studies indicate that within a few decades machines will be able to do a large part of the jobs humans do today, and looking at technology such as the digital assistant Amelia, developed by IPsoft, it’s not hard to see that this is starting to happen already.

One main issue for society to deal with will be how to distribute wealth when the salary-for-work model is broken. I invite policy makers, economists and others to start discussing this immediately, since we will run into difficulties sooner than most people think.

So far, few new models have been proposed. One of them is a basic income for all citizens, which at first glance could seem a weak solution. Most people would probably dismiss the idea intuitively and say that it wouldn’t work.

It turns out, however, that recent real world experiments show the opposite! In this video, Federico Pistono, author of the book ‘Robots Will Steal Your Job, but That’s Ok’, talks about some hugely interesting results from studies of Unconditional Basic Income.

In a randomized trial in rural India, where 12,000 people were given an unconditional basic income for 18 months, results were unambiguously positive in all ways.

For this to happen, there were some important conditions.

– The income must be basic, i.e. it must cover basic needs. In this case about 24 dollars per family per week.

– It must be distributed to everyone individually, including children and the elderly, though children below the age of about seven could have their income managed by their parents.

– It must be unconditional. No strings attached, such as ‘you need to buy food’ or ‘you need to bring children to school’. Every such condition costs money for control and increases the possibility of corruption. Just let people decide what to do with the money.

So what were the results? Look here:

– Uptake of the income was 93% after one month (after a few weeks people needed to have a bank account, which turned out not to be a problem).

– Even though everyone received a basic income, labor, productivity and work increased.

– All measurable social indicators were better than in a control group without unconditional basic income.

– The total cost of the program was less than keeping existing social programs.

– People were twice as likely to have increased their productivity at work, they increased their livestock by 70% and they were more likely to increase income from work.

– There was a significant reduction in indebtedness, and an increase in savings.

– People were spending more on transport to school and they were more likely to improve their house, supply of clean water etc.

– There was an improvement in children’s weight for age and this was more pronounced for girls.

– People had more varied diets, and they were NOT more likely than others to spend on private bads such as alcohol or tobacco.

– And FINALLY: People were three times as likely to start a new business or production activity as others!

Or as Pistono puts it:

‘The moment you don’t have to worry about money for survival, that is the moment when you can use your social capital and your intelligence to actually start something meaningful.’

One important aspect of this is that many people might choose to do voluntary work or other activities that are not considered ‘profitable’ in our society today. Decoupling income from work actually seems to liberate large amounts of activity, but it’s hard to believe since most of us have a strong feeling that if we’re not productive, we’re not entitled to an income.

On the other hand, from a larger perspective you could ask what it is to be productive. Productive for whom? For your employer, or… for humanity and the world?

Several other studies confirm the results from India, Pistono reports. Most of them have been done in the last few years, and in India there are now plans for a large scale study in 1,000 villages.

Meanwhile, a referendum on an unconditional basic income of about 2,500 euros is planned in Switzerland (although it’s not obvious that results from a country like India can be translated to a rich western country).

In the end you could ask this question: Why do we do anything at all (beyond basic needs such as eating, sleeping and reproducing)?

You could search for the answer at many levels. My answer is at the deepest existential level: the drive to develop and move ahead seems built into every single part of the Universe, and it’s an intrinsic part of us.

This is the force that makes everything in the Universe self-organize, that has made particles form atoms and molecules, that made the DNA molecule form and life appear, and that can be observed in ourselves: once we have discovered a better way to do something, it’s virtually impossible for us not to do it that way.

We just cannot help developing. It’s part of us and irresistible.

I believe this is one of the most fundamental forces in the universe. I have no good explanation for its deepest origin, but I’m convinced it must be embraced.

Decoupling income from work seems to be a good way to embrace this fundamental force in us, and it also seems possible for the first time in history. I believe we should continue to seriously investigate this possibility.

Here’s my Youtube channel on technology driven change

Technology is changing our world at an accelerating pace, and the change is going faster than most people would think.

This is the theme that interests me the most and that I’m passionate about, and it’s also the theme that I regularly give talks and seminars on, at conferences and to various institutions, government agencies and corporations.

And to share my views and ideas in one more way, I have launched a YouTube channel. At start I have a few videos on topics such as accelerating technology, digitalization, industries being exposed to fundamental change, artificial intelligence and super human intelligence.

Building up the content takes time, which is my most scarce resource, but I’ll do my best to upload some of those fascinating nuggets of scientific and technological news I regularly get across, adding my views and thoughts.

Please don’t hesitate to get back with comments and suggestions!

The second edition of “An Impossible Invention” is out

To everyone who has read my book ‘An Impossible Invention’ — thanks for all your support so far! Every one of you has meant a lot to me, as have all the personal messages, emails, reviews, comments, text messages and even phone calls I have received from all over the world.

Now a second edition of the book is out, in English and in Swedish.

It’s available as an e-book, both at the An Impossible Invention Shop and through Amazon. It’s also available as a paperback through CreateSpace and, within a few days, through Amazon. Click here for details on how to get the book.

An Italian translation is underway and will hopefully be released in the beginning of next year.

Read more at the book’s web site.

Here is how we could coexist with a superintelligence

The idea of a superintelligence might be frightening. I have touched on the subject before, and I have also discussed why human values (hopefully) might be important to a superintelligence.

But we don’t know. Professor Nick Bostrom at the Future of Humanity Institute at Oxford University believes that superintelligence could put all humanity at risk, and he’s doing research on how we could prepare for such a technology and make it inherently safe, before we build it. Because, as some think, we will only get one chance.

The assumption is that intelligence is more powerful than anything else, and that human intellect can never compete with a superintelligence — an entity that might be to us like we are to a rabbit. Or an ant.

But there’s a small possibility that humans will be able to match the capabilities of something far more intelligent. Not being the way we are today, naturally. Let’s see.

I’ve been thinking about what entrepreneur and research director at Google, Ray Kurzweil, usually predicts — that humans probably will integrate with technology to increase our cognitive capacity. I discussed this recently also with Danica Kragic Jensfelt, professor in robotics and computer science at the Royal Institute of Technology.

She thinks it will be difficult for humans to accept that we might merge with different kinds of technology, since we will no longer know for sure what a human is.

“It’s frightening,” she says. “We have been humans for so long.” Yet she finds this perspective more likely than the classic science fiction they-will-fight-us-scenario.

I also considered what Ergun Ekici, VP of emerging technologies at IPsoft, which develops the AI system Amelia, told me — that machines won’t take jobs from humans. His view on technology such as artificial intelligence is that it helps humans, moving the bar upwards on what is possible for people to do, alone or in a group, all the way from those least trained to those who are real experts in an area.

My concern though, has always been that when machines reach the intelligence level of humans, there’s nowhere to push that bar upwards. Machines will simply replace us.

Which, honestly, is not that bad. I sometimes think it’s bad luck belonging to the last generation that had to work…! And I believe there are lots of good aspects to this, if automation and AI can provide good conditions for everyone to lead a good life at low cost. Humans could then concentrate on developing their skills and passions, and share them with others.

But… if we return to the concept of superintelligence — the hypothesis is that an intelligence explosion might lead to entities that are not at all interested in humans, and might not consider us important to preserve. Which is bad.

It struck me, however, that I would be quite happy if I could integrate with a system that would enhance my cognitive capacities, helping me to sift through enormous amounts of information effortlessly, and also to write pieces like this or other stories — which is my daily work — in a few seconds.

Now, what would this let me do?

Well, if the hard work is done in seconds, I might be able to grasp concepts at a higher abstraction level.

Ray Kurzweil, who has a theory on how to create a mind, describes our mental system as a hierarchical structure of abstraction levels, where we apply pattern recognition at each level, all the way from dots and lines to abstract concepts such as irony.

And here’s what struck me: There’s an obvious limit to the human brain’s level of intelligence, but I can see no immediate limit to its possible level of abstraction, provided that the underlying information process at lower abstraction levels is taken care of.

So this is the trick: If we integrate with cognitive systems that efficiently take care of abstraction levels up to a certain point, the human brain might be able to climb on top, using its creative capacity, developing a new level of abstraction, no matter how high. And match any superintelligence.

Also, this might be one possible way in which a superintelligence could emerge for the first time.

There are a few catches however.

You could compare this idea to how our mind works today. It’s not very different, since vast portions of the information processes that support our conscious mind are unknown to us. Building a higher-level mind on top of a machine intelligence would not be inherently different.

The main difference is that we are quite sure that we can trust our unconscious mind, since we’ve grown up with it for a life time, and since methods for manipulating it, e.g. those pictured in the movie Inception, are not yet well developed, even though research on how to eliminate targeted memories in the brain is going on.

Trusting an artificial mind, which undoubtedly would be connected to the Internet, is quite another thing. To reach sufficiently fast interaction with our mind, it will most probably have to be directly integrated with our brain. And even though trust could be built, as it often is with new technologies, by seeing that it works and is safe, the security issue cannot be overstated.

The risks range all the way from malicious manipulation to commercial offers to tune your thoughts in exchange for some free stuff, just like today. But kind of different…

Another question is the time needed to train people to interact with such a mind, learning to reach new levels of abstraction, which is difficult to assess.

Yet I believe that we could see this as a possible way of building a superintelligence, with humans in the loop, hopefully limiting the intrinsic dangers in the power of an intelligence explosion.

Amelia understands what you say, and acts

Amelia is a new hire at a call center. She answers in two seconds, solves the problem in four minutes instead of an average of 15 minutes, and customers are quite happy. Amelia never goes home.

Amelia is what might be the most advanced artificial intelligence on Earth so far, developed by US-based IPsoft, which normally sells autonomous systems for managing large IT systems — you know, replacing human intervention.

For 15 years IPsoft has been working on a secret side project, developing a cognitive system with the aim that it must:
1. understand natural language
2. learn through natural language
3. leverage what it has learnt to solve problems

The system was presented last month, and a few days ago IPsoft visited Stockholm to show the technology to some 50 potential customers.

Proof of concept has already been carried out, with Amelia working in customer service or internal help desks at a handful of large American companies during the last year.

Gartner was allowed to talk to these companies and states that Amelia is the next level up from IBM Watson, the system that beat humans in Jeopardy! in 2011 and now helps doctors diagnose cancer. In the report Gartner says:

Gartner verified over 10 direct examples of IPsoft client references that all unanimously supported productivity benefits of a much higher magnitude (consistently over 50%) than any other managed services offerings in the industry. It’s rare to find a unanimous endorsement of this type. One client example had 56% of its client incidents resolved without human intervention and a 60% reduction in the mean time to resolution for its IT service desk.

One client — an oil and gas company — wanted to verify that Amelia could handle a helpdesk situation on an oil rig and let her digest a manual for a centrifugal pump for a few seconds. They then asked her questions.

“Which are the parts of the pump? What could a ticking sound be a sign of?”

I got a demonstration of the same test, and Amelia answered promptly without hesitating — short and concise answers.

Most impressive though is maybe how she learns. When she first started to work in call centers she often had to pass questions on to a human agent. But Amelia then stayed on the line, listening to the conversation, learning from it. And after 30 days she reached the efficiency levels Gartner reports.

“The best method for Amelia to learn is like for you, by doing things yourself. Amelia learns from interaction with people. During the first 30 days her learning is exponential as compared to the lab,” Ergun Ekici, co-founder and VP of emerging technologies at IPsoft, told me.

The funny thing is that Ekici says he has a hard time believing that machines could one day become as intelligent as humans. He sure has a point, though, saying that after 15 years of research in this field he has developed a tremendous appreciation of human intelligence.

IPsoft says Amelia will be introduced commercially within a month or so.

The company’s CEO is Chetan Dube, and it is privately financed by Dube’s family.

Interview on radio show Free Energy Quest tonight

Tonight, Thursday October 9, I’ll be interviewed by Sterling Allan on his radio show Free Energy Quest, at 3 pm PST, midnight Central European Time (CET). You can listen live here. We will be talking about my book An Impossible Invention, and about the recent third party report on the E-Cat.

The show will be available for listening also afterwards.

New scientific report on the E-Cat shows excess heat and nuclear process

This blog post was originally published on

The reactor used in the test is made of alumina and is significantly thinner than earlier hot E-Cat reactors.


A new scientific report on the E-Cat has been released, providing two important findings from a 32-day test run of the reactor — together leading to the clear conclusion that the E-Cat is an energy source based on some kind of nuclear reaction, without radiation outside the reactor.

The first is an energy release which puts the reactor way beyond conventional (chemical) sources of energy.

The second is a dramatic shift in the isotopic composition of the fuel after the test run, meaning changes have occurred in the atomic nuclei of the elements present in the fuel.

The report is entitled “Observation of abundant heat production from a reactor device and of isotopic changes in the fuel” (Download here) and is written by Giuseppe Levi, Evelyn Foschi, Bo Höistad, Roland Pettersson, Lars Tegnér and Hanno Essén, all of whom also wrote an earlier third party report on the E-Cat.

In the concluding remarks they write:

“In summary, the performance of the E-Cat reactor is remarkable. We have a device giving heat energy compatible with nuclear transformations, but it operates at low energy and gives neither nuclear radioactive waste nor emits radiation. From basic general knowledge in nuclear physics this should not be possible. Nevertheless we have to relate to the fact that the experimental results from our test show heat production beyond chemical burning, and that the E-Cat fuel undergoes nuclear transformations. It is certainly most unsatisfying that these results so far have no convincing theoretical explanation, but the experimental results cannot be dismissed or ignored just because of lack of theoretical understanding.”

The authors are very careful not to draw any decisive conclusions on how the reaction occurs. Yet they make some interesting remarks, among them considerations of similarities with observations in astrophysics.

Without any optimization with regard to input power, the reactor produced between 3.2 and 3.6 times the input power, and a total energy of 1.5 MWh from about 1 gram of fuel. The reactor was switched off according to plan, with no signs of the reaction slowing down. As I point out in my book An Impossible Invention — an energy source of this kind will have huge consequences for humanity, possibly solving a series of global issues.
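To put these figures in perspective, here is a rough back-of-the-envelope calculation, using only the numbers quoted above (1.5 MWh from about 1 gram of fuel); the gasoline figure is a standard reference value, not from the report:

```python
# Back-of-the-envelope comparison of the reported E-Cat energy density
# (1.5 MWh from about 1 gram of fuel, as quoted above) with gasoline.
total_energy_j = 1.5 * 3.6e9          # 1.5 MWh in joules (1 kWh = 3.6e6 J)
fuel_mass_g = 1.0                     # approximate fuel mass in grams
ecat_j_per_g = total_energy_j / fuel_mass_g

gasoline_j_per_g = 4.6e4              # specific energy of gasoline, ~46 MJ/kg

ratio = ecat_j_per_g / gasoline_j_per_g
print(f"Reported energy density: {ecat_j_per_g:.1e} J/g")
print(f"Roughly {ratio:,.0f} times the specific energy of gasoline")
```

An energy density about five orders of magnitude beyond chemical fuels is what underlies the report's conclusion that no chemical process can explain the result.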

In order to address doubts raised about their earlier report, several things were changed: the measurement was performed during 32 days in a neutral laboratory in Switzerland, the electrical measurement of the input power was improved, a 23-hour test of the reactor without charge was done in order to calibrate the measurement set-up, and chemical analysis of the fuel before and after the run was performed with five different methods.

The report has been uploaded to which, however, has put it on hold without specifying any motive. It has also been sent to the Journal of Physics D. I got the report from Hanno Essén, who said that he now considers it to be public, although it is not supposed to be published in any commercial journal until further notice from the Journal of Physics D.

I asked Professor Bo Höistad, one of the authors, a few questions on the report:

Mats: What do you consider to be the most important take-away of the report?

Höistad: That we have been able to do an isotopic analysis of the fuel before and after running the process, and that the results indicate the presence of nuclear reactions in the process.

Mats: What have you done differently this time, based on the experiences from your last measurement and report?

Höistad: An accurate measurement, particularly the control of the energy balance without fuel in the reactor, and an isotopic analysis of the fuel.

Mats: What reactions do you expect on the report?

Höistad: Hopefully that interest in the possibility of achieving LENR reactors gets a decent boost, and that critical overtones in the debate are downplayed in favor of scientific discussions.

Mats: What do you personally feel facing the inexplicable observations you have made?

Höistad: As pointed out in our paper, we face a phenomenon without explanation. However, we cannot categorically reject the clear experimental results just because a credible theory is currently lacking. We need to relate to the actual experimental results and continue the investigations to gain more knowledge about the LENR phenomenon.

An Impossible Invention on Amazon — second edition upcoming

(This blog post was originally published on


Lots of people have asked me to make ‘An Impossible Invention’ available on Amazon in order to reach a broader audience. So I did — now there’s an e-book version in Amazon’s Kindle format listed here.

The paperback version will get there later for a simple reason:

I’m now working on a second edition of the book with minor updates and corrections. A more detailed examination of the legal saga of Andrea Rossi will be added, as well as an update reflecting the findings about the company Defkalion’s technology, revealed in early 2014.

I’m also waiting for the upcoming third party report on Rossi’s E-Cat, which is expected to be published shortly, in order to include this report and comments on it in the second edition.

As soon as the new edition is ready it will be available on Amazon, initially as an e-book, and later also as a paperback through a print-on-demand service.

The support I have received from all those who have ordered the book so far, and through all emails, phone calls and overwhelming reviews, has meant a lot to me, and I’d like to offer everyone who has read the first edition a free download of the second edition as an e-book. If you ordered the book through you will receive an email with this offer, once the second edition is published.

Thank you all!

Artificial baby mind learns to talk

In the last months I have been immersed in exciting projects, while also keeping up with how the story told in my book An Impossible Invention continues to evolve.

There’s been so much on the theme of The Biggest Shift Ever that I would have liked to share in blog posts, so much fascinating science and tech news flowing towards me every day in the newsroom, depicting a world in accelerating innovation and change. But I just haven’t had the time.

Meanwhile I try to share parts of this flow on Twitter, so please follow me there if you would like updates more often. Hopefully I will be able to be more active here in a few months.

Today I just wanted to share one of the most intriguing pieces of research I’ve come across lately — an artificial simulated toddler learning to talk while interacting with its ‘caregiver’. Just watch this amazing video:

The project, which involves computational models of the face and brain, combining bioengineering, computational and theoretical neuroscience, artificial intelligence and interactive computer graphics, is being developed at the Laboratory for Animate Technologies at the University of Auckland in New Zealand.

I wouldn’t claim that AI has reached the point of a human baby mind just yet, but I think this is another clear sign that we’re on our way to get there.

Swedish National Radio paints it black

(This blog post was originally posted on

The scientific newsroom of Sveriges Radio, the Swedish national radio broadcaster, has dedicated four months of research and a whole week of its air time to the story of Andrea Rossi, the E-Cat and cold fusion (parts 1, 2, 3 and 4), and I’m honored that it has made me one of its main targets.

The result, however, is not impressive.

Ulrika Björkstén, head of the scientific editorial staff, has chosen freelance journalist Marcus Hansson to do the investigation.

Hansson apparently likes easy solutions: black or white. I won’t go into the details of his analysis of Rossi’s background, since I have no reason to defend Rossi. I’m just noting that Hansson believes he can sort out the truth in the twinkling of an eye in Italy, known as one of the most corrupt countries in Europe, where the mix of powerful interests, politics and the judiciary is not always easy to penetrate.

I’m also noting tendentious conclusions, such as that being sentenced to prison implies being an impostor, and unproven claims, such as that storing toxic waste in leaking cisterns equals the Mafia’s way of dumping such waste in secret pits.

After his analysis of Rossi, Hansson adds a group of Swedish researchers and the Swedish power industry’s research entity Elforsk, depicting them all as a bunch of gullible fools being used by Rossi for his purposes, and pointing at me as the one who got them involved in the first place. I’m flattered.

Hansson considers all this obvious, basing large parts of his report on the testimony and opinions of the Italian-French writer Sylvie Coyaud, scientific blogger for the weekly Italian style magazine D-La Repubblica.

But all this is only half of the problem.

Hansson starts his reportage by stating that the famous claim by Fleischmann and Pons in 1989, of excess heat compatible with a nuclear reaction, was wrong and later explained by erroneous measurements.

I believe he’ll find that hard to prove, given that by 2009 there were 153 peer-reviewed papers describing excess heat in experimental set-ups such as the one used by Fleischmann and Pons. And that’s only one of many reasons.

I discuss this in the beginning of my book. Hansson says he read the book and found it to be a tribute to Rossi. Coyaud says it’s a story where Rossi is Messiah and I am the Prophet. That’s poetic, but it’s an opinion.

Among the hundreds who have read it, about fifty people have written reviews, most of them giving it the highest rating. A number of highly competent people with insight into the story thought it was well balanced.

I do discuss Rossi’s problematic background in the book, and when that’s done I discuss his problematic personality.

But the main focus I have chosen is another, reflecting the title of the book: discussing what is considered to be impossible, and asking why more resources aren’t dedicated to investigating this strange phenomenon that could possibly change the world, providing clean water and clean air, saving millions of lives and solving the climate crisis.

Not because I wish this to be true, but because there are abundant scientific results indicating that the phenomenon might be real.

It’s insane that curious researchers are hesitating to enter this field for fear of ruining their careers (yes Björkstén, this is why most of them are old), and it’s insane that poorly researched media reports like this help scientific critics to continue attacking those researchers.

Marcus Hansson says he has read my book, but maybe he hasn’t understood what he read. In fact I’m worried that neither he nor Coyaud have the competence to evaluate this complex story from a scientific perspective. I might be wrong, but from Hansson’s reportage I’m not convinced.

What I find more problematic though is the position of Ulrika Björkstén, head of the scientific editorial staff at Sveriges Radio, holding a Ph.D. in physical chemistry. I agree with most observers that it’s not proven whether Rossi’s E-Cat works or not, and Björkstén might of course be convinced that it’s not working.

But in a concluding comment Björkstén discards the whole area of cold fusion/LENR as pseudo-science, stating that it is based on belief and group thinking, and that university researchers should discern such research from real science and stay away from it.

I find this alarming both from a journalistic and a scientific point of view. Such opinions have often been expressed regarding disruptive discoveries, and if we took advice only from people like Björkstén we would probably not have any airplanes or semiconductors today.

I welcome serious criticism of my reports and of my book, but this reportage does not qualify. I’m not impressed, and I hope that the next science news team that decides to evaluate this story and my book will set the bar higher.

You might agree with me or not. If you have an opinion, I would suggest that you write an email to Ulrika Björkstén who oversaw the production of this reportage. Marcus Hansson probably just did his best.

– – – –

N.B. This is my personal opinion and not a statement from Ny Teknik. UPDATE: Here’s an official op-ed by Ny Teknik’s chief-editor Susanna Baltscheffsky. And here’s a piece by the Swedish researchers who have been involved in tests.

Defkalion demo proven not to be reliable

(This blog post was originally posted on

Alexandros Xanthoulis at Defkalion’s demo in Milan, July 23, 2013. Photo: Mats Lewan

The measurement setup used by Defkalion Green Technologies (DGT) on July 23, 2013, to show via live streaming that the Hyperion reactor was producing excess heat, does not measure the heat output correctly, and the error is so large that the reactor might not have worked at all.

This is the conclusion of a report (download here) by Luca Gamberale, former CTO of the Italian company Mose srl that at that time was part of the joint venture Defkalion Europe, owned together with DGT.

The report is based on experiments, performed mainly after the live streaming, using the same setup but without the reactor being active. Yet, the experiments showed that it was possible to obtain a measured thermal power of up to about 17 kW, while the input electric power was about 2.5 kW.

I asked Gamberale if this erroneous result could have been present without DGT realizing it.

“To obtain this effect it’s necessary to operate two valves in a certain way, so you need to have the intention to do it,” Gamberale told me.

Those of you who have read my book ‘An Impossible Invention’ know that Defkalion was an early partner to Rossi, supposed to build applications using Rossi’s reactor as a heat source. When Rossi ended the agreement with Defkalion in August 2011, Defkalion stated that operations continued, and later Defkalion claimed to have developed its own similar technology, producing heat from a reaction involving nickel and hydrogen.

Test results and measurement data were never disclosed, but in July 2013 Defkalion finally decided to make a public demo, live streamed during the cold fusion conference ICCF 18. I was present at the demo on July 23 in Milan, Italy, and reported my impressions in two blog posts here and here, trying to be as objective and neutral as possible, since I believe that my readers should draw their own conclusions.

“If you believe the values presented…”, I wrote, and that was also the main problem. It was not easy to verify possible errors or hidden mechanisms in a short time frame, especially since Defkalion didn’t accept changes in the setup, and it was therefore not evident that the values should be believed. I reported them as presented, though.

Gamberale describes in the report that before the demo, Mose had proposed a series of improvements to the measurement setup in order to make it more reliable but that DGT did not allow these changes. He notes that the lack of cooperation made it necessary to carry out independent verification tests.

The tests focused on a possible malfunction of the digital flow meter used to measure water flow in the setup. It was shown that by decreasing the input water flow to almost zero, the flow meter started to make fast movements back and forth, and since the direction of the flow was not registered by the flow meter, these fast movements resulted in a reading corresponding to a relatively high flow, although the flow was almost zero.

Since the calculation of thermal heat was based on how much water was heated by the reactor, this measurement error resulted in a large calculated thermal heat output, while the actual thermal heat was much lower.
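The arithmetic behind this kind of error can be sketched simply. In flow calorimetry the thermal power is computed as P = ṁ·c·ΔT, so the calculated output scales directly with the measured water flow, and an inflated flow reading inflates the apparent power by the same factor. A minimal illustration, with made-up numbers rather than Defkalion's actual data:

```python
# Flow calorimetry sketch: thermal power P = m_dot * c * delta_T.
# Any error in the measured flow rate scales the computed output
# power directly. Numbers below are illustrative only.

C_WATER = 4186.0  # specific heat of water, J/(kg*K)

def thermal_power(flow_kg_per_s, delta_t_kelvin):
    """Thermal power carried away by the heated water, in watts."""
    return flow_kg_per_s * C_WATER * delta_t_kelvin

true_flow = 0.001       # 1 g/s actually flowing
indicated_flow = 0.060  # 60 g/s indicated by an oscillating flow meter
delta_t = 60.0          # measured temperature rise, K

print(thermal_power(true_flow, delta_t))       # ~251 W actual
print(thermal_power(indicated_flow, delta_t))  # ~15,070 W apparent
```

With an almost-zero real flow misread as a substantial one, a kilowatt-scale "excess heat" reading appears even though little heat is being produced, which is the mechanism Gamberale describes.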

The explanation is thoroughly discussed in the report. Most important, however, is the fact that Gamberale has shown with this experiment that the setup could produce readings of large amounts of excess heat without the reactor running, and that any result from the setup showing excess heat is therefore unreliable.

Gamberale explained to me that he presented these findings to Defkalion’s president Alexander Xanthoulis, and to Defkalion’s engineer Stavros Amaxas who was operating the setup at the public demo.

According to Gamberale, Xanthoulis said “Ok, we don’t know, this could be possible, but in any case we are sure that the reaction exists”.

Gamberale described Amaxas’ reaction to be much stronger. Defkalion’s CTO John Hadjichristos was not present at that meeting.

In his report, Gamberale also notes that Mose srl has given DGT some time to provide evidence that its technology is real, despite the findings presented, but that after several months, no answer has been given.

As I write in my book, Gamberale and the president of Mose srl, Franco Cappiello, who told me that he had invested €1 million in the joint venture, decided to put all commercial activity on hold until Defkalion could carry out a measurement that dispelled their doubts. They later closed Defkalion Europe altogether.

I called Alexander Xanthoulis and asked for a comment. He didn’t dispute the result of the report but pointed out that the calorimetric set-up at the Milan demo was not made by Defkalion but by Mose. Gamberale confirmed this but explained that the set-up was made according to strict instructions from Defkalion, and that when Mose added some component, such as another independent flow meter or another method for measuring thermal heat output, these additional components were immediately removed by Defkalion personnel without discussion.

Xanthoulis also said that he didn’t understand why Gamberale hadn’t asked these questions earlier during months of contacts and visits by Mose at Defkalion’s offices in Canada, and by Defkalion in Milan. Gamberale explained that he had tried to get the information he needed but that he was never allowed to make the measurements he asked for. Instead he described his role as one of an observer.

Finally, Xanthoulis pointed out that the flow calorimetry measurements (measuring thermal energy output by heating flowing water) were not the important ones, but that the most important measurements were made on the bare reactor, calculating the output thermal energy by measuring temperatures at various points on the reactor without heating any water (using the Stefan–Boltzmann law). He told me that these measurements had been sent to Gamberale twice.
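For reference, the bare-reactor estimate works roughly like this: the Stefan–Boltzmann law gives the power radiated from a hot surface as P = εσA(T⁴ − T_amb⁴). A minimal sketch, where the emissivity, surface area and temperatures are my own illustrative placeholders and not Defkalion’s figures:

```python
# Stefan-Boltzmann estimate of radiated power from a hot surface:
# P = eps * sigma * A * (T_surface^4 - T_ambient^4), temperatures in kelvin.
# Emissivity, area and temperatures below are assumed example values.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2*K^4)

def radiated_power(emissivity, area_m2, t_surface_k, t_ambient_k):
    """Net thermal radiation from a surface to its surroundings, in watts."""
    return emissivity * SIGMA * area_m2 * (t_surface_k**4 - t_ambient_k**4)

# e.g. a 0.05 m^2 surface at 600 K (~327 C) in a 300 K room:
p = radiated_power(0.9, 0.05, 600.0, 300.0)
print(round(p, 1))  # ~310 W
```

Because the output goes as the fourth power of the surface temperature, this method is very sensitive to the assumed emissivity and to where the temperature is measured, which is one reason a bare spreadsheet of temperature readings is hard to evaluate on its own.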

“He sent an Excel spreadsheet with no explanation including a couple of incomprehensible graphs in which it was not even written what it was about. I felt almost offended. I’m asking a justification of an abnormal result regarding a claim of a nuclear reaction that would change the history of the world, and I get an Excel sheet without any specification of what it is,” Gamberale commented.

I got the spreadsheets from Gamberale. They contain temperature measurements in degrees Celsius on various points of the reactor and can be downloaded here (sheet 1 and sheet 2). I know they are accurate since Xanthoulis sent me one identical document, asking me not to publish it.

I have studied Gamberale’s report and I find it both detailed and convincing. It should make Defkalion’s case difficult.

Gamberale doesn’t openly accuse Defkalion of fraud, but he makes it clear that the Milan demo presented no evidence that the technology is working.

The doubts I have had about Defkalion, described in my book, are obviously strengthened by the report. Some wondered about the uncertainty regarding Defkalion’s technology that I expressed recently in an interview with John Maguire at Q-niverse. One important reason was Gamberale’s report, which I had already received by then.

And while I write in the last chapter of the book that it’s hard to assess Defkalion, but that if its claims can be trusted Defkalion might have made the most progress among those working with LENR technology based on nickel and hydrogen, I now find that less likely.

Alexander Xanthoulis still claims, however, that the development of the new reactor is on track and that according to plan it will be certified with regard to safety and security by a Canadian certifying body, corresponding to the US-based Underwriters Laboratories, within the next few months. After that, Defkalion could start licensing the technology to partners. National licenses were previously offered at EUR 40.5 million, and though Xanthoulis told me that five contracts have been signed, he also said that no money has yet been transferred.

But Defkalion will now have to present solid evidence to convince anyone that its technology is valid, and also let those people make changes to the test protocol and to the measurement set-up, if it’s necessary in order to eliminate uncertainties.

Gamberale told me that the findings he describes in the report could bring damage to serious research activities within LENR, but he also told me that he personally still believes that LENR is an important scientific and technological area and that he is getting involved in two other projects in this domain.

(Added on May 16): Gamberale has a PhD in theoretical high energy physics from the University of Milan, and at the Milan-based Pirelli Labs he further developed the theoretical work in coherent electrodynamics by his countryman, the late Dr. Giuliano Preparata. His experimental work includes assessing the technology of BlackLight Power, and he has also made studies on electrochemical loading of palladium wires.

A few more researchers who were never recognized

(This blog post was originally posted on

As those of you who have already read my book ‘An Impossible Invention’ know, it’s written in memory of Martin Fleischmann (1927 – 2012), Sergio Focardi (1932 – 2013) and Sven Kullander (1936 – 2014). All these three persons were important for my work, and they all left us while I was working on the book.

Sadly enough, several other researchers within the field of LENR and cold fusion passed away during the same period, and I would like to commemorate them too in this post (click on their names to get further information about their lives and their careers):

Talbot Chubb (1923 — 2011), Scott Chubb (1953 — 2011), P. K. Iyengar (1931 — 2011), John O’M Bockris (1923 — 2013) and Emilio del Giudice (1940 — 2014).

Again, if LENR/cold fusion turns out to be an important energy source that might bring fundamental change to the world, which, as you probably know by now, I personally believe, it is worth remembering that none of these researchers was ever recognized for their important contributions to the knowledge in this field.

If my book can contribute to raising public attention for LENR, and increase the possibilities to build on these researchers’ work in order to find out as soon as possible whether there’s a way to make this technology useful to humanity, I would be more than happy.

So far I have been overwhelmed by the response to the book. Many have given me strong support, for which I’m very grateful, and a few have criticized me, which has given me the opportunity to go through the arguments for bringing this story to public awareness.

Nobel Laureate Brian Josephson made a short review of the book at, and you can read his review on the start page of

Frank Acland at E-Cat World made an interview with me, which is published here.

Several persons have written reviews that you can find at the book shop An Impossible Invention — Shop (you’ll find the reviews under each version of the book).

An intense discussion has been going on on my personal blog — “The Biggest Shift Ever”.

And many of you have emailed me directly with wonderful personal support. Thanks!

I’ve also found a few errors which have now been corrected in the e-book version:

The Italian words cappuccino and colazione were misspelled, as was the name of the road Viale Fulvio Testi in Milan, and also the name of the Italian steel mill company Falck (which I on one occasion called Salk). Due to an error in translation from Swedish, I put binoculars in the hands of Galileo Galilei, but of course he used a telescope.

As you know, this story is still unfolding and I’m receiving information that I will share in this blog, and that will also be added to both the ebook and the paperback in upcoming editions.

Stay tuned.

Here’s my book on cold fusion and the E-Cat

(This blog post was originally posted on


For three difficult years I have experienced much that I wanted to discuss, that I had thought people would want to investigate and understand better. Yet reaching out has been difficult for me. I want you, the reader, to comprehend, forgive and then participate.

The term ‘cold fusion’ is so stigmatized that everything even vaguely connected with it is ignored by media outlets in general and by the science community in particular. Unless it’s attacked. Meanwhile we might be missing an opportunity to change the world.

That’s why I’m relieved today, when I can finally share this story in my new book An Impossible Invention. It’s about, yes, cold fusion.

It’s actually two stories. One story in the book is about cold fusion itself, about the inventor Andrea Rossi and his energy device the ‘E-Cat,’ about the people around him and about how I became involved and subsequently investigated and contributed to a series of on-going events in this scientific arena.

The other story in the book is about how people relate to the unknown, to the mysterious, to the improbable and to what we believe is ‘impossible.’ The story of how new ideas are accepted or rejected, of whether one is curious or uninterested, open-minded or prejudiced.

The book may reveal events surrounding Rossi and the E-Cat. It should inspire some readers and upset others. I hope it will provoke discussions—lots of discussions, among other things about what’s impossible or isn’t. Consider what the British runner Roger Bannister—the first human to run a sub-four-minute mile, previously believed impossible—perceptively stated: “The human spirit is indomitable.”

Who knows what will happen? More is to come. You, the reader, will play an important role in determining how these matters evolve.

By the way–just as I’m writing these words I’m receiving new information on events that strengthen some pieces of the story in the book, and also some information that adds to my doubts regarding certain stakeholders. I cannot tell you more right now, but I will keep you updated in this blog and in the free newsletter of the book.

Google’s goal: To control the world’s data

The humanoid Atlas, developed by Boston Dynamics.


In 2013, Google acquired eight companies specializing in robotics, and many have asked what Google will do with all those robots.

The eighth company was Boston Dynamics, which through funding by DARPA has developed a couple of high-profile animal-like robots and the two-legged humanoid Atlas.

A week after that acquisition, Google became the world’s robot king when one of the companies it had previously bought won the final trials of the DARPA Robotics Challenge–a competition where robots are expected to manage tasks like climbing a ladder, punching a hole through a wall, driving a car and closing valves. Second was a team that used the Atlas robot.

Google’s interest in robots should be seen in the broader context of its other ventures–everything from the digital glasses Google Glass and driverless vehicles to its established services–web search, maps, Street View, videos, the Android OS, web-based office applications and Gmail. Plus the latest big acquisition at $3.2 billion–Nest Labs, which develops the self-learning thermostat Nest and the connected smoke detector Protect.

The common denominator is data. Large amounts of data about what users are doing and thinking, about where they go and what the world looks like.

It fits the robot venture. As robots become more capable they will perform increasingly sophisticated tasks and gradually take over many jobs from humans. During their work, they will collect huge amounts of data, about everything, everywhere in the world.

It is not obvious that Google will have access to all this data. Nest, for example, has made it clear that the company’s policy on privacy remains firm after the takeover, and that data from thermostats may only be used to ‘improve products and services’.

But Google has repeatedly demonstrated its ability to offer attractive free services where users willingly share their data in exchange for the service.

Added to this is Google’s focus on learning machines and advanced artificial intelligence — most recently through the acquisition of the British AI company Deep Mind, reportedly for around $500 million, and also through the recruitment of futurist and entrepreneur Ray Kurzweil as Director of Engineering last year (Ray Kurzweil’s latest book is called How to Create a Mind).

If it is possible to develop an artificial consciousness in a machine, one may ask how far such a consciousness reaches. One way to respond–which I touched on in this post–is to compare with a human being, whose consciousness reaches as far as her body and its senses. An artificial consciousness would then by analogy be limited to the sensors it controls in order to collect data.

Google is then in a good position. And though I don’t believe that Google has any evil plans at all, this scares me far more than the surveillance in which NSA and other intelligence agencies are engaged, combined.

Interception and surveillance will never give nearly as much data about us as Google can get, and it can be regulated. What Google will do with all the data that we willingly share is something no-one else can control.

(This post was also published in Swedish in Ny Teknik).

Survival of the Fittest Technology

I have already outlined the ideas of author and entrepreneur Ray Kurzweil, currently Director of Engineering at Google, on exponentially accelerating technological change. His ideas are based on what he calls the Law of Accelerating Returns — the fairly intuitive suggestion that whatever is developed somewhere in a system increases the total speed of development in the whole system.

The counterintuitive result of this is an exponentially increasing pace, which on the other hand is supported by observations; at this moment the pace of development doubles about every decade, leading to a thousandfold increase in this century compared to the last.
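The thousandfold figure is just compound doubling, which can be checked in one line:

```python
# A pace of development that doubles every decade compounds over the
# ten decades of a century to 2**10 -- roughly a thousandfold increase.
doublings_per_century = 10
factor = 2 ** doublings_per_century
print(factor)  # 1024
```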

I have also discussed the thoughts of Kevin Kelly, described in his book What Technology Wants. Kelly suggests, in line with Kurzweil, that technological development is a natural extension of biological evolution, keeping up the exponential pace that can be observed all the way from single-celled organisms (although you could discuss whether DNA actually has had the time to evolve on Earth).

I also find Kelly’s suggestion intuitive. If you consider spoken language as one of man’s first technological inventions, you could ask if it’s not so intimately linked to the human brain that it could be regarded as part of evolution. Spoken language is a grey zone between evolution and technology that highlights the links between them and their dependence on each other — both having a similar nature if you see them as a whole and look beyond the molecules and atoms they are made of.

This leads to a concept that I have been surprised to observe as being hardly mentioned before — The Survival of the Fittest Technology.

It’s the idea that technological inventions obey the same rules as evolutionary steps in nature. Only the most fit (best adapted, best conceived) inventions will reach the market and gain massive support and usage among people and thus survive and be subject to further development, refinement and combination with other technologies.

This idea is intimately linked to what the biologist and researcher Stuart Kauffman calls the adjacent possible—that new inventions are based on fundamentals and skills already in place–a concept that the author Steven Johnson develops in the book Where Good Ideas Come From: The Natural History of Innovation (2010):

“The adjacent possible is a kind of shadow future, hovering on the edges of the present state of things, a map of all the ways in which the present can reinvent itself.”

Some of the adjacent possibles are inherently strong and more fit than others. When you already have the telephone, the idea to make it cordless and then mobile is so natural and strong that it just cannot avoid being realized. Different details in the development of the mobile phones are equally exposed to the survival of the fittest, defining the path to a robust and useful technological solution.

What you could ask, and what has been discussed by several people, is whether there are multitudes of different paths that evolution and technological development could take, or if the adjacent possible and the survival of the fittest have such strong inherent patterns that there’s basically only one way, with small variations. This would mean that if we replayed everything from the Big Bang, the result would be essentially the same.

Kevin Kelly also reasons along these lines. He suggests that there’s a third driving mechanism behind evolution, besides random changes/mutations and natural selection/survival of the fittest. The third vector is structure, inevitable patterns that form in complex systems due to e.g. physical laws and geometry.

He then proposes that technological development is based on a similar triad where the natural selection is replaced by human free will and choice.

I like this link between evolution and technology. But I believe that it’s the random change that is replaced by, or at least mixed with, human free will and choice. Accidents and random changes happen, but the function of mutations in nature would largely correspond to humans’ intentional design of technology, changing different aspects at will.

My point is, however, that natural selection is not replaced by human choice. It is as present in technological development as in biological evolution. Although the survival of the fittest technology is a result of human choice and free will, it’s a sum of many individuals’ choices, a collective phenomenon, that is not possible to control by any single mind. 

And therefore Survival of the Fittest Technology appears to be what survival of the fittest is in nature–an invention/species being exposed to a complex and interacting environment where only the best conceived and best adapted thrive.

Issue number 3 of Next Magasin focusing on cyborgs

Next Magasin 3


We just released a fresh issue of the Swedish forward looking digital magazine Next Magasin, for which I am the managing editor, this time focusing on cyborgs. The main feature reportage by journalist Siv Engelmark is a fascinating journey through the aspects of our use of technology to make humans into something more than humans.

Engelmark has talked to cognitive scientists, literary scholars, philosophers, pioneers and futurists, trying to find out what the consequences of human enhancement are and what people think of it.

Next Magasin 2


Apart from the feature story there are several interesting pieces on subjects such as electronic blood, biomimetics with bumblebees, brain controlled vehicles, space buildings on Earth, sensor swarms, disruption of healthcare, synthetic biology, teleportation and more.

The magazine is subscription-based and can be downloaded as an app for iOS or Android, or read on a PC.

By the way — I forgot to post about the release of issue number 2 back in June 2013. Journalist Peter Ottsjö wrote an eye-opening feature story on virtual worlds, which are not, as you may think, just some old remnants of Second Life.

Instead, virtual worlds are today developing into a rich series of opportunities for both professionals and consumers, and they’re bound to take a larger part in our life than most people realize, bringing significant changes to our way of living.

On the front page you can see the Japanese mega star Hatsune Miku, who is all virtual — a virtual synthesizer voice for which fans write songs, performed as a projection of the virtual artist in real arenas with thousands of people watching.

In a decade or two, the physical world will be just a subset of our lives.

Here are three good reasons to have a look at cold fusion

Stanley Pons and Martin Fleischmann with their reactor cell.


Ever since Martin Fleischmann and Stanley Pons presented their startling results in 1989, claiming that they had discovered a process that generated anomalously high amounts of thermal energy, possibly through nuclear fusion at room temperature, cold fusion has been rejected by the mainstream scientific community.

For anyone open to believing the contrary, here are three good reasons (remember that cold fusion would be a clean, inexpensive and virtually inexhaustible energy source that would use a gram of hydrogen to run a car for a year):

1. Lessons from cold fusion archives and from history.
A comprehensive outlook on the field of cold fusion, including references to papers with specific instructions for anyone who would like to reproduce the Fleischmann and Pons effect (explaining why it is so difficult).  Presented at the cold fusion conference ICCF-18, 2013, by Jed Rothwell who runs — an online library with documents and papers regarding cold fusion.

2. The Enabling Criteria of Electrochemical Heat: Beyond Reasonable Doubt
A paper from 2008 by Dennis Cravens and Dennis Letts, indicating four criteria for reproducing the Fleischmann and Pons effect. Cravens and Letts had gone through 160 papers concerning generation of heat from the F&P effect, and found four criteria correlated to reports of successful experiments, whereas negative results could be traced to researchers not fulfilling one or more of those conditions.

3. A brass ball remaining four degrees warmer than another.
An elegantly designed experiment by Dennis Cravens, performed recently at NI Week 2013, where two brass balls were resting in a bed of aluminum beads at constant temperature. Yet one of the brass balls, containing another kind of experimental set-up with materials similar to those in Fleischmann’s and Pons’ experiment, remained four degrees warmer than the bed and the other ball, with no external energy input. This is not a replication of the F&P effect, but it indicates that the process can be implemented in different forms (gas loading instead of electrolysis).

Please add a comment if you have any other comprehensive and convincing document to suggest, regarding cold fusion or LENR (Low Energy Nuclear Reactions).

Update on Defkalion’s reactor demo in Milan

(This update comes a little bit late, I apologize for that).

Defkalion's reactor enclosed in ceramics and a metal casing. In the background Alex Xanthoulis and John Hadjichristos. Photo: Mats Lewan


Defkalion’s reactor demo in Milan in July has been discussed extensively. A series of concerns have been raised, among them that the flow measurement was not accurate and that the flow of steam output into the sink was weaker than could be expected.

Regarding the steam flow I already said that I regret not having opened the valve leading straight down towards the floor (the one we used when calibrating the water flow) to get a visual observation of the steam flow. I have later understood that others have asked to do the same thing but that Defkalion declined, arguing that opening that valve would disturb the equilibrium in the system.

After the demo I sent a couple of follow up questions to Defkalion’s chief scientist, John Hadjichristos, and I would like to share his answers here.

Mats: A Faraday cage only shields from electric fields, not magnetic fields. Can you discuss further how the strong magnetic fields you mentioned, reaching 1.6 Tesla, were shielded?

Hadjichristos: First of all we wish to clarify that the reported magnetic anomalies values relate to peak measurements. Shielding of such “noise” is done using mu metal materials and solenoids during tests having the declared objectives as in the protocol submitted to ICCF18. I apologize for the technically not correct use of the terms “cage” or “Faraday cage” as used in our internal lab jargon.

From a reader: Between 21:10 and 21:33 the output temperature rose from 143°C to 166°C. But the inner reactor temperature was constant the whole time at 355°C–358°C, and the coolant flow was also constant at 0.57–0.59 liter/min. Is there any explanation for this phenomenon?

Hadjichristos: When coolant is in dry steam condition, flow is not constant. A pressure barrier within the coil surrounding the reactor creates flow fluctuations that result in such ‘strange’ thermal behavior of the coolant during the aforesaid period, strongly related also to stored energy in the reactor’s metals. This can be easily explained noting also:

As I explained live during the demo, the flow measurement algorithm in our Labview software uses the slope (first derivative) of the plot of the reported fn pulses from the flow meter, and not the n/(1/f1+1/f2+…+1/fn) or the more commonly used (f1+f2+…+fn)/n methods, as the latter are very sensitive, leading to huge systematic errors and wrong calorimetry results when such fluctuations occur. The consequent “cost” of the method we use is the delay in the reported values on screen, which obviously does not influence the total energy output calculations with any “noise” as all fn values are used, whilst all thermometry measurements are “quicker” in reporting on screen. All three flow calculation methods from the flow meter’s signals give identical instant flow measurement results only when f1=f2=…=fn, i.e. when no steam pressure blocks water from flowing smoothly from the grid.

Thanks to your reader for bringing up this issue of wrong flow algorithms in use in similar calorimetry configurations, an issue not much commented on or analyzed in blogs.
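To make the point about averaging methods concrete, here is an illustrative comparison of the two formulas Hadjichristos names, applied to flow-meter pulse frequencies f1…fn. This is my own sketch, not Defkalion’s LabVIEW code. With steady flow the methods agree; with fluctuating pulses the arithmetic mean is dragged upward by brief bursts of fast pulses, while averaging the pulse periods (a harmonic-style mean) is far less affected:

```python
# Comparison of two ways to average flow-meter pulse frequencies f1..fn
# (illustrative sketch, not Defkalion's actual software).

def harmonic_mean(freqs):
    # n / (1/f1 + 1/f2 + ... + 1/fn): averages the pulse *periods*
    return len(freqs) / sum(1.0 / f for f in freqs)

def arithmetic_mean(freqs):
    # (f1 + f2 + ... + fn) / n: averages the frequencies directly
    return sum(freqs) / len(freqs)

steady = [10.0, 10.0, 10.0, 10.0]       # constant flow: methods agree
fluctuating = [2.0, 2.0, 2.0, 40.0]     # brief burst of fast pulses

print(harmonic_mean(steady), arithmetic_mean(steady))  # both 10.0
print(harmonic_mean(fluctuating))   # ~2.6, close to the slow baseline
print(arithmetic_mean(fluctuating)) # 11.5, pulled far upward by the burst
```

This is why a naive frequency average over a fluctuating signal can systematically overstate the flow, and hence the calculated thermal output.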

Mats: Could you tell me which other external persons/validators were supposed to come and why they didn’t come?

Hadjichristos: No.

Mats: The sink where the steam was output, was it a normal sink with an open hole in the bottom leading to the ordinary drainage network, or was there any active venting, e.g. a fan, drawing gas down the sink? Could you also tell me the inner diameter of the steam outlet tube?

Hadjichristos: There was not any active venting to or in the drainage. The output pipe driving the steam to the drainage network was a 1/2″ diameter copper pipe (not thermally insulated after the Tout thermocouple) whilst the PVC drainage pipe diameter was 2″. Cold water was flowing into the drainage hole from a water supply to protect the PVC drainage pipe from melting.

– – – –

Finally I would like to share some photos from the demo (click on the images for larger view).

The reactor with the metal casing open. Photo: Mats Lewan

The reactor chamber of a reactor not in use. Photo: Mats Lewan

Another reactor, not used during the demo. Photo: Mats Lewan

Tubes supplying hydrogen or argon gas to the reactor. Photo: Mats Lewan

The insulated outlet tube from the reactor at the thermocouple measuring outlet temperature. The valve with the red handle is open, letting water/steam flow upwards and eventually into the sink. Photo: Mats Lewan

The high voltage generator. Photo: Mats Lewan

Specification label on the high voltage generator. Photo: Mats Lewan

The vacuum pump used to degas the reactor between control run and active run. Photo: Mats Lewan

Spark plug inserted into reactor not in use. According to John Hadjichristos an ordinary spark plug, as opposed to an “other type of heavily modified spark plugs that we use in our plasma subsystem configuration.” Photo: Mats Lewan

Thinking, fast and slow, pattern recognition and super intelligence

This summer’s reading has been Thinking, Fast and Slow by the Israeli-American psychologist and winner of the Nobel Memorial Prize in Economic Sciences, Daniel Kahneman.

Great reading (although a little heavy to read from start to end in a short time).

The book describes the brain’s two ways of thinking — the faster, more intuitive and emotional ‘System 1’, as Kahneman calls it, which incessantly interprets impressions and makes associations, and the slower, consciously controllable and more rational ‘System 2’ which resolves problems, allows us to focus and to control ourselves, but that also requires a significant effort when activated.

The message of the book is that we tend to rely too much on human judgment, frequently based on intuitive conclusions served by System 1 — conclusions that System 2 often lazily accepts, instead of activating itself to assess them rationally.

For me, however, the book brought a couple of other thoughts. One was that System 1’s constant search for patterns and recognition is reminiscent of an idea of what the basic algorithm of the brain’s way of working could be. Author and entrepreneur Ray Kurzweil has suggested that pattern recognition is what the brain is in fact engaged in, at levels from dots and dashes all the way to abstract phenomena like irony.

Kurzweil presents this idea in his book How to Create a Mind (2012) and he calls it the Pattern Recognition Theory of Mind (read more in this earlier post where I also note that Kurzweil is now Director of Engineering at Google, working with machine learning).

I was also struck by the idea that the image of the two systems could help when trying to imagine what super-intelligence might be like (which I discussed in this post). Supposing that machines will one day, in a not too distant future, achieve human intelligence and consciousness, which I believe is reasonable (by the way — have a look at this research in which an AI system was IQ-tested and judged to have the intelligence of a four-year-old), then they will soon after become super-intelligent, although that might be difficult to comprehend.

But try to imagine the associative power of System 1, constantly tapping into years of experience of different patterns, phenomena, objects, behaviors, emotions etc, and then imagine having the same kind of system tapping into a much larger quantity of stored data, performing associations at significantly higher speed.

Then imagine a System 2 able to assess the input from such a System 1 on steroids, capable for example of performing multidimensional analysis — i.e. the same kind of classic sorting we do when we picture a phenomenon in four quadrants, with two axes defining two different variables (like this one) — although a super-intelligent System 2 would do the same thing with a thousand variables.

Such an intelligence would probably have forbearance with our limited capacity to see the whole picture, but hopefully it would also have sympathy for our capacity to enjoy life with our limitations.

Comments on Defkalion reactor demo in Milan

Yesterday I participated as an observer at the Greek-Canadian company Defkalion’s demo of its LENR based energy device Hyperion in Milan, Italy. The device is just like Andrea Rossi’s E-Cat, loaded with small amounts of nickel powder and pressurized with hydrogen, and supposedly produces net thermal energy through a hitherto unknown process that seems to be nuclear (LENR stands for Low Energy Nuclear Reactions).

Defkalion used to be a commercial partner to Rossi until an agreement was cancelled in August 2011 (read more at Ny Teknik here).

The demo was the first public (apart from a short pre-run on Monday) from Defkalion that since 2011 claims to have developed its own core technology.

My general impression is that it’s a process similar to what I have seen at Rossi’s demos. If you believe the values presented, it produces thermal power in the order of kilowatts from a very small amount of fuel. Although Defkalion has a somewhat different method of controlling the reaction, it still seems to be a delicate thing to get it to work well without it stopping or running away.

I believe we will get some reliable answers on the validity of Defkalion’s and/or Rossi’s technology during this year.

At Defkalion’s demo I was asked to verify calibrations and measurements just before the start of the demo, although I had not been prepared for this. Yet, here are a few more detailed considerations:

– the demo was set up in the lab of Defkalion’s Europe office, and thus under complete control by Defkalion. All instruments and sensors were Defkalion’s.

– as far as I could verify there were no hidden wires or energy sources. I cannot completely exclude them, but my general impression was that of a fairly transparent implementation. I was offered the chance to check anything except the inside of the reactor, and even to cut cables (although I never did).

– all values were collected with National Instruments’ LabVIEW.

– input electric power was also measured by me with a Fluke True RMS clamp ampere meter (Defkalion’s) and a standard voltmeter (my own). Electric energy was input through two variacs — one for seven electric resistors connected in parallel inside the reactor, and one for a high voltage generator feeding sparks through two modified spark plugs. I measured both before and after the variacs.

– output thermal power was calculated through water flow and delta T of the water cooling the reactor ((Tout – Tin) × 4.18 × water flow in grams/second).

– a control run was performed with argon instead of hydrogen, which showed no excess power. Calibration of the water flow was done and checked by me during the control run and showed that the real water flow was a few percent greater than what was shown in LabVIEW.

– an issue was detected as LabVIEW showed an input electric power to the high voltage generator of between 200 and 300 watts, whereas I measured an input electric power to the HV generator of between 1.0 and 1.3 kW. We never found out what caused this discrepancy.

– in the active run with hydrogen the output thermal power reached about 5.5 kW, whereas the total input power was about 2.7 kW, taking into account the higher value of the power fed into the HV generator.

– Defkalion had expected to reach a higher output power but admitted that degassing the reactor only an hour after the argon run had been a problem. The process is supposedly very sensitive to small amounts of gases other than hydrogen.

– the enthalpy of vaporization was not taken into account. Yet the temperature at the output reached over 160 degrees Celsius with an open-ended output tube, thus basically at atmospheric pressure. The output was led down into a sink. Initially water was pouring down, but at high temperatures no water was dropping at all. If all the water was vaporized, the output thermal power would have been above 27 kW.

– the hydrogen canister seemed to be a standard commercial canister containing ordinary hydrogen — no deuterium.

– I could detect no DC voltage or current at any point. The Fluke clamp meter was capable of measuring DC.
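As an illustration of the two output-power figures above — the flow calorimetry and the vaporization bound — here is a minimal sketch in Python. The formulas follow the description in the text; the example flow and temperature figures are hypothetical, not measured values from the demo.

```python
# Flow calorimetry: P = (Tout - Tin) * c_water * flow, as described above.
# The vaporization bound assumes water enters at ~20 C, is heated to 100 C
# and fully vaporized at roughly atmospheric pressure (steam superheating
# ignored). All example numbers below are hypothetical.

C_WATER = 4.18   # J/(g*K), specific heat of liquid water
L_VAP = 2260.0   # J/g, latent heat of vaporization at ~100 C

def thermal_power_watts(t_out_c, t_in_c, flow_g_per_s):
    """Thermal power carried away by the liquid cooling water, in watts."""
    return (t_out_c - t_in_c) * C_WATER * flow_g_per_s

def power_if_fully_vaporized_watts(flow_g_per_s, t_in_c=20.0):
    """Lower bound on output power if all cooling water is vaporized."""
    return flow_g_per_s * (C_WATER * (100.0 - t_in_c) + L_VAP)

# A 40 C rise at 32 g/s corresponds to roughly 5.4 kW...
print(round(thermal_power_watts(60.0, 20.0, 32.0)))   # ~5350 W
# ...while fully vaporizing about 10.5 g/s already exceeds 27 kW:
print(round(power_if_fully_vaporized_watts(10.5)))    # ~27241 W
```

The second function shows why vaporization matters so much: the latent heat (about 2260 J/g) dwarfs the sensible heat of the liquid water, so if all the water really turned to steam, the true output would be several times the figure from the delta-T calculation alone.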

UPDATE: I forgot to say that according to CTO John Hadjichristos there are HUGE magnetic fields inside the reactor as a result of the reaction, in the order of 1 Tesla if I remember right, possibly due to extremely strong currents over very short distances. Hadjichristos says the field is shielded by double Faraday cages, probably the reactor body and the external metal cover outside the heat insulation.

UPDATE 2: Since I have been asked if I can exclude that hydrogen was fed into the reactor during the experiment I have to admit that I didn’t check that the valves were closed. Bear with me.

And some statements/claims from Alex Xanthoulis, president of Defkalion:

– Collaboration is ongoing with six companies for the development of particular applications. Several of these companies are among the 10 largest companies in the world. The applications concerned are: UAVs, computers, water boilers, electric power generation, greenhouses, ship propulsion (managed by Defkalion), automobiles, water desalination/purification (non-profit organization) and big turbines.

– Agreements for licensing the manufacturing of a consumer product — the Hyperion — have been signed with companies in Italy, France, Greece (Defkalion 50%), Canada and South Africa. 1,300 companies in about 78 countries are interested. The license price has previously been EUR 40.5 million.

– Defkalion has no external investors so far. Principal owner is Alex Xanthoulis.

Prof Sergio Focardi dead at 80

Prof Sergio Focardi

Sergio Focardi, emeritus professor of experimental physics at the University of Bologna, passed away during the night between Friday June 21st and Saturday June 22nd, at the age of 80, after a long illness.

Focardi was born on July 6, 1932, in Firenze, Italy. He was Dean of the Faculty of Mathematical, Physical and Natural Sciences from 1980 to 1989 and chaired the education in data sciences at Bologna University’s campus in Cesena from 1992 to 2000. For many years he was a leading member of the Italian Physics Society, and he was also president of the Bologna department of the Italian National Institute for Nuclear Physics.

During the 1990s Focardi did research on LENR together with the physicists Francesco Piantelli and Roberto Habel. The three researchers studied a system of nickel and hydrogen in which they detected an anomalous thermal energy release.

LENR is a general term for effects of anomalous heat production, i.e. an energy release which is orders of magnitude greater than what is possible from chemical reactions, yet cannot be identified with any presently known nuclear reaction such as fission or thermal fusion. Such reactions are also connected with the term cold fusion.

Since 2007 Focardi worked as scientific adviser to Andrea Rossi, inventor of the energy device E-Cat, which is also based on nickel and hydrogen.

As a reporter for Ny Teknik I interviewed Focardi in February 2011. He then explained his motivation for a semi-public demonstration of the E-Cat:

“When you achieve results it is gratifying to spread the word on them. Moreover, I am 78 years old and cannot wait that long.”

The funeral of Focardi was held on Monday 24th of June in Bologna.

Focardi is survived by his wife, his son and two daughters.

Update of Swedish-Italian report, and Swedish pilot E-Cat customer wanted

The report on energy measurements on the E-Cat by a Swedish-Italian group of scientists has been updated with an appendix explaining more in detail the measurements of input electric power. The new version of the report can be found here.

It has been discussed whether a DC current could have been drawn through the power supply to the control box of the E-Cat, without being detected by the instruments, and thus feeding undetected power into the E-Cat.

The new appendix gives a clearer picture of how the electric measurements were done. Both voltage and current were monitored. Since a DC current through a load would have resulted in a DC voltage, this would have been detected by the measurement instrument as an offset of the AC voltage sine curve.
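To illustrate why, here is a small numerical sketch (my own toy example, unrelated to the actual instrument): the mean of a sampled pure AC sine over whole periods is essentially zero, so any DC component shows up directly as an offset of the curve.

```python
import math

def sample_waveform(amplitude, dc_offset, n=1000, periods=10):
    """Sample a sine of the given amplitude plus a DC offset over whole periods."""
    return [dc_offset + amplitude * math.sin(2 * math.pi * periods * i / n)
            for i in range(n)]

def mean(samples):
    return sum(samples) / len(samples)

# Hypothetical 230 V RMS mains (peak ~325 V), with and without a 12 V DC component.
pure_ac = sample_waveform(amplitude=325.0, dc_offset=0.0)
with_dc = sample_waveform(amplitude=325.0, dc_offset=12.0)

print(round(mean(pure_ac), 3), round(mean(with_dc), 3))  # ~0.0 and ~12.0
```

Whether a given instrument actually reports such an offset is another matter, of course — which is exactly the question about the PCE-830 discussed below.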

However, it’s not clear from the specifications of the instrument — the PCE-830 Power Analyser — whether it can detect DC voltage. I will investigate this issue further.

UPDATE: I have been in contact with a representative of PCE Instruments UK Ltd who has confirmed that the PCE-830 cannot detect DC voltage. When connected to an AC source with a DC offset it will display the graph of the AC voltage correctly, but it will not detect the DC offset.

Pilot Customer Wanted

Today the Swedish-British company Hydrofusion, which has a commercial licensing agreement with Andrea Rossi regarding the E-Cat, stated that it is looking for a pilot customer in Sweden for a 1 MW E-Cat plant.

According to Hydrofusion the intent is to make the 1 MW plant available to a customer who will only pay for the (thermal) energy consumed. Installation is scheduled for late fall 2013.

Rossi’s 1 MW plant, which consists of about 100 E-Cat modules, was originally tested in October 2011, though no independent observers could confirm the measured energy output.

Rossi claims that he has designed a new 1 MW plant for an unknown U.S. customer and business partner, and that it will be shipped to this customer shortly. It would then be made available to a customer of this customer, who will only pay for the consumed energy, as in the plan for Sweden.

Criticism, praise and comments on the Swedish-Italian E-Cat report

Swedish nuclear physicist Peter Ekström, an earlier critic of energy measurements on the E-Cat, has published his comments on the recent Swedish-Italian report on indications of anomalous heat production in the E-Cat.

Ekström’s comments, which can be found here, focus on a number of issues, ranging from calibration of the input power measurements and the method of thermal output power measurement, to implementation of the null test (running the reactor without fuel) and also an alleged non independence of the authors of the report.

He concludes:

“If the E‐Cat does indeed function as Rossi claims, this would require radical changes in nuclear physics as we know it today (Coulomb barrier, primary gammas, decay of radioactive isotopes). The evidence provided in the report falls far short of indicating that this is the case.”

I obtained a brief answer from Professor Bo Höistad, representative of the authors of the report:

“I would recommend a thorough reading of our paper in which several of Ekström’s questions are answered. Also to be noted is that this is the first test that produced sufficiently interesting results to motivate continued work with further experiments to verify or challenge the results achieved so far. This is the normal procedure in physics when unexpected results occur. There is still much work to be done before we can definitively determine if Rossi’s E-Cat works. We intend to continue this work in the next step.”

I would also like to note some other comments on the report.

Several fierce critics hide behind anonymous pseudonyms, which I believe is unfortunate. An example is this lengthy post by the pseudonym Joshua Cude, arguing that the report is ‘yet another unrefereed, sub-par cold fusion claim to add to the pile of unrefereed sub-par cold fusion claims.’

A few possible scam methods have been suggested, among them power input through hidden wires inside the power supply cables, thus fooling the clamp ampere meter; a hidden power supply at frequencies above the limit of the power analyzer; and a hidden power supply through direct current, DC, possibly not detected by the clamp ampere meter.

As for the DC hypothesis, Torbjörn Hartman, co-author of the report has stated:

Remember that there were not only three clamps to measure the current on the three phases but also four connectors to measure the voltage on the three phases and the zero/ground line. The protective ground line was not used and lay curled up on the bench. The only possibility to fool the power meter then is to raise the DC voltage on all four lines, but that also means that the current must have another way to leave the system, and I tried to find such hidden connections when we were there. The control box had no connections through the wood of the table. All cables in and out were accounted for. The E-cat was just lying on the metal frame, which was free-standing on the floor with no cables going to it. The little socket, where the mains cables from the wall connector were connected with the cables to the box and where we had the clamps, was screwed to the wood of the bench, but there were no screws going through the metal sheet under the bench. The sheet showed no marks on it under the interesting parts (or elsewhere, as I remember it). Of course, if the little white socket was rigged inside and the metal screws were long enough to go just through the wood, touching the metal sheet underneath, then the bench itself could lead current. I do not remember if I actually checked the bench frame for cables connected to it, but I probably did. However, I have a close-up picture of the socket and it looks normal, and the screws appear to be of normal size. I also have pictures of all the connectors going to the power meter and of the frame on the floor. I took a picture every day of the connectors and cables to the power meter in case anyone would tamper with them when we were

I lifted the control box to check what was under it, and when doing so I tried to gauge the weight; it is much lighter than a car battery. The box itself has a weight, of course, and what is in it cannot be much.

All these observations take away a number of ways to tamper with our measurements, but there can still be things that we “didn’t think of”, and that is the reason why we can only claim “indications of” and not “proof of” anomalous heat production. We must have more control over the whole situation before we can talk about proof.

My understanding is that a DC current through a load inevitably results in a DC voltage over this load, which should be detected by the measurement described by Hartman.

By the way, it has been noted that Hartman has a PhD in medical science. I have been told that the reason is that Hartman’s subject, radiobiology, was taught at the faculty of medicine at the time of his research studies. However, he has essentially no education in medicine; his degree is in engineering.

It should also be noted that the degree “civilingenjör” in Swedish is a generic term for Master of Science degrees in engineering, and does not correspond to civil engineering.

Among the authors of the report, everyone except Evelyn Foschi has a PhD.

Some comments on the measurements from another of the co-authors, theoretical physicist Hanno Essén of the Royal Institute of Technology in Stockholm, can be found on

An interesting analysis is made by engineering consultant David Roberson, who has had a look at the temperature curve over time compared to input electric power. Roberson draws the conclusion that the E-Cat must be operated close to a point where runaway could occur, and that a Coefficient of Performance, COP, of about six more or less follows from a model he has built from the data.

Earlier analyses by Roberson of measurement data on the E-Cat can be found in my article in Ny Teknik here.

Another comment by Prof Bo Höistad has been published in a comment to a post by Mark Gibbs in Forbes, and is referred to here. I have obtained the original wording in Swedish from Höistad; here it is in English translation:

1) All input power was under full control.

2) No hidden energy source in the stand.

3) This is a good question to ask.

In physics we cannot rely on belief or gut feeling as to whether a phenomenon occurs or not. We must find out what actually happens through careful measurements. As a nuclear physicist I can, however, say directly that, based on well-established knowledge of nuclear processes, the probability of nuclear transmutations being the cause of the heat production in the E-Cat is vanishingly small. Moreover, if such reactions for some unknown reason were to take place anyway, they would leave traces behind, which have not been observed so far.

We wanted to investigate whether Rossi’s claimed heat production could be verified in an independent measurement. The first result is that we have obtained an indication that a heat production actually occurs which cannot be explained by any chemical process. How the heat is produced remains obscure. The result is of course very dramatic and absolutely must be verified further before any definitive statements can be made. We intend to do that in a next step.

Much work remains before it can be determined whether Rossi’s E-Cat works. The results so far are interesting enough to continue that work.

Finally, some praise for the report. Jed Rothwell, who runs a library of papers about cold fusion, calls the paper a gem, highlighting the conservative assumptions made by the authors.

Two 100 hour scientific tests confirm anomalous heat production in Rossi’s E-Cat

A group of Italian and Swedish scientists from Bologna and Uppsala have just published their report on two tests, lasting 96 and 116 hours, confirming an anomalous heat production in the energy device known as the E-Cat, developed by the Italian inventor Andrea Rossi.

The report is available for download here and on

I have earlier reported extensively on the E-Cat in the Swedish technology magazine Ny Teknik, but for more than a year very little new verified information has been available. This looks different.

The conclusion of the report is that the heat production is orders of magnitude beyond any conventional chemical energy source, beaten only by nuclear-based power sources. Yet the scientists have systematically made conservative assumptions in order to base the result on a worst-case scenario.

“Even by the most conservative assumptions as to errors in the measurements, the result is still one order of magnitude greater than conventional energy sources.”

In the tests, about 5.6 and 2.6 times the input energy was produced, respectively (the COP). A hypothesis for the lower value in the second test is a lower working temperature: on average 302 °C, against 438 °C in the first run.

In the second test an identical dummy reactor without fuel charge was run with the same experimental set-up and found to produce no excess heat.

In their report the scientists also describe a third test, in which the reaction went out of control and destroyed the reactor. Through the reactor tube, made of ceramics and steel, they could observe two red heat sources where the fuel charges supposedly were located (see picture above). The heat was so intense that the coils of a number of electric resistors used to start the reaction could be seen as shadows against the glowing red light.

Another observation regards the shape of the rising and falling temperature curve, clearly indicating an active heat source which doesn’t behave like an electric heat source, but rather like an accelerating reaction.

Throughout the tests no significant radiation above ambient background could be detected.

The reactor used in the test was called the E-Cat HT, where HT stands for High Temperature. It’s also known as the Hot Cat and is a development of an earlier model that reached about 100 degrees Celsius. In both models the fuel charge consists of a small amount of hydrogen-loaded nickel powder plus some unknown additives.

The tests were performed in Andrea Rossi’s premises in Ferrara, Italy, in December 2012 and March 2013.

The authors of the report are Giuseppe Levi, physicist, Bologna University; Evelyn Foschi, Bologna; Torbjörn Hartman, responsible for radiation protection at the Svedberg Laboratory; Bo Höistad, professor of nuclear physics; Roland Pettersson, lecturer in physical and analytical chemistry; and Lars Tegnér, physical chemist and former development director at the Swedish Energy Agency, all representing Uppsala University; and Hanno Essén, assistant professor and theoretical physicist at the Royal Institute of Technology in Stockholm.

A longer test of the same device, lasting about six months, is planned for later this year.

We plan to publish a follow-up report with comments in Ny Teknik soon.

Update: Here’s our report in Ny Teknik with comments from Professor Bo Höistad (in Swedish only).

An upcoming opportunity – what to do with dead shopping malls

Here’s a great upcoming opportunity for anyone who is creative and wants to be at the forefront when the timing is right: what to do with all those huge shopping malls that within a decade will stand empty and outdated.

The fact that these enormous volumes are still being built all over the world underlines the need for creative solutions.

If you don’t believe me, here’s why you should.

I already outlined how a number of industries will be hit by the digital revolution in the same way the music industry was. Media, education, healthcare, transportation, law and finance/trading are all undergoing a fundamental change as their content is digitized and becomes possible to handle in completely new ways, opening up a series of radically new business models.

Retail is another area which is being hit by the same development.

Online commerce is already taking over a substantial part of retail in physical stores. With the progress of virtual showrooms and fitting rooms, and of virtual reality with augmented possibilities for sensing structures and materials, this development will accelerate.

Another part of retail will be taken over by the growth of 3D printer technology, making it possible to produce single items at home, at progressively increasing quality and resolution.

As we will spend more and more time in virtual environments — or in the metaverse — our interest in buying physical items will get competition from our interest in buying virtual items in various virtual worlds. The reason is simple: Exactly in the same way that you want to make a good impression in the physical world, you will want to do it in the virtual world, be it clothes, objects, real estate, interior design or other products.

Spending real money on virtual items might seem absurd to many people, but keep in mind that the cost of manufacturing and physical material in products such as clothes is already a minimal part of the total cost today. What we pay for is design, brands, packaging, storage, retail space and distribution, and of these we give the most value to design and brands.

Eliminating manufacturing and the physical material is in the end a minor change, when you think about it, and we will still be inclined to pay for the value of design and brands (I don’t discuss here whether this is sound or not, I just believe this is the way we will behave).

Putting all this together, the future of physical retail in shopping malls looks quite bad. Non-existent, to be honest.

So — start dreaming!

What would you like to suggest as a new way of using the enormous empty spaces of dead shopping malls?

A kind of modern castle with huge halls and rooms for spacious urban living — all heated by sustainable solar power?

Urban farming, zero kilometer, ecological and highly automated, possibly vertical if the shopping mall was a skyscraper?

Hangars for fleets of autonomous aerial vehicles for urban transportation?

Don’t wait to start inventing! These ideas might be worth investigating sooner than you think!

Suppose Google plans to create a mind

I’m reading the latest book by Ray Kurzweil — How to Create a Mind (2012). In the book Kurzweil pulls together different pieces of cutting-edge brain research and puts them in the context of his own experience of developing technology for voice understanding and character recognition.

The result is the Pattern Recognition Theory of Mind, PRTM. It might seem surprisingly simple, but I must admit that I find it very attractive and plausible, bearing in mind that nature tends to find clean, well-optimized and straightforward solutions. And the PRTM is such a solution — a highly flexible structure that is relatively easy to grow biologically but which permits an incredibly complex development of the mind.

What Kurzweil basically argues is that the main algorithm in the neocortex — the outermost layer of the mammalian brain, which is most developed in the human brain and contains all higher functions — is one single algorithm whose only purpose is to recognize patterns.

These pattern recognition modules — there would be a few hundred million of them in the neocortex — are organized in a hierarchical structure where the lowest modules (in an organizational sense, not a physical one, as they’re all positioned in a single layer) recognize very basic pieces of patterns, such as contours and small pieces of sound, and the highest represent concepts such as irony and aesthetics.
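As a toy sketch of this hierarchical idea (my own illustration, far simpler than Kurzweil’s actual theory, with made-up “stroke” and “letter” labels): each layer scans its input for known sub-sequences and passes higher-level labels upward.

```python
def recognize_layer(tokens, patterns):
    """Replace known sub-sequences of tokens with their higher-level label."""
    out, i = [], 0
    while i < len(tokens):
        for pattern, label in patterns:
            if tokens[i:i + len(pattern)] == list(pattern):
                out.append(label)        # this recognizer fires
                i += len(pattern)
                break
        else:
            out.append(tokens[i])        # nothing fired; pass the token up unchanged
            i += 1
    return out

# Hypothetical two-level hierarchy: strokes -> letters -> a word.
strokes_to_letters = [(("vert", "mid", "vert"), "H"), (("round",), "O")]
letters_to_words = [(("H", "O"), "HO")]

strokes = ["vert", "mid", "vert", "round"]
letters = recognize_layer(strokes, strokes_to_letters)
word = recognize_layer(letters, letters_to_words)
print(letters, word)  # ['H', 'O'] ['HO']
```

Kurzweil’s modules also pass expectations downward and tolerate noisy, partial matches; this sketch only captures the bottom-up, compositional aspect of the hierarchy.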

From this starting point Kurzweil elaborates the whole process of how the mind learns and develops, and he also outlines how an artificial mind could be created.

I’m confident Kurzweil is actually in a good position to move such a project forward. And here’s the thing: in the middle of December 2012 Kurzweil was hired by Google as Director of Engineering.

Given the strong engineering culture at Google and its interest in cutting-edge engineering projects such as the autonomous car, I wouldn’t be surprised if Google gave the endeavor to create an artificial mind a shot, with Kurzweil in charge as director of the project.

It is worth noting a similar but in a sense opposite project — the Human Brain Project, which was just financed with half a billion euros by the EU. The Human Brain Project is a remarkable venture with its roots in the Blue Brain Project led by Henry Markram at EPFL in Lausanne, Switzerland, with the aim of building a complete computer-based simulation of the human brain.

The fundamental difference between the two projects is that whereas Kurzweil has a top-down approach — analyzing what the possible algorithms defining the brain’s way of processing information might be, and creating a computer-based representation at a progressively more detailed level — the Human Brain Project’s approach is bottom-up, modelling individual neurons and their connections and eventually arriving at a complete simulation of the brain as an organ.

Both approaches will have their strengths and weaknesses, and maybe we’ll see them meet halfway.

Imagine hi-res trading replacing crowdfunding and VCs

One main aspect of the digital revolution is that it opens up completely new business models and mechanisms in an industry — such as Spotify’s model of streaming music, as opposed to downloading and owning music files or even CDs.

In an earlier post I selected six industries that will be hit next by the digital revolution, and one of them is finance and trading.

This is already an extremely computerized sector where demand is not focused on economists any longer, but on experts in mathematics, statistics, computer science and programming. The computer systems that are being used are among the most powerful in the world, competing in the nanosecond range on sell and buy orders in automatic trading.

You might conclude that we have already seen the shape of future digital trading. I believe we haven’t.

I already pointed out that what’s going on now is a search for better models which can help outperform competitors when you cannot beat them with speed and computing power alone. The models need to include information on history and the future, and also on people’s and markets’ sometimes irrational behavior, thus being much more complex than models of physical systems.

But what’s more exciting is trying to understand what happens next.

There’s no reason to believe that automatic trading will go away. On the contrary, it will become progressively more accurate and powerful, with models that are gradually more capable, using technology such as artificial intelligence and increasing amounts of data.

You could compare this with the development of digital cameras, where ever increasing resolution makes digital photography capable of capturing finer and finer details of reality.

I expect the same evolution in trading. Automated systems will eventually be so powerful and have such a high resolution that you can have markets with not just a couple of hundred companies, but thousands or even millions.

IPOs could be handled automatically, and any company, no matter how small, could go through this process at any moment, in real time, 24/7.

Meanwhile anyone could invest any amount according to specific criteria and might have a quite predictable return on investment, as the number of companies invested in would be big enough to create a certain statistical stability.
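That statistical-stability argument can be sketched with a toy simulation (my own illustration; the uniform return distribution is invented): spreading the same capital over more independent investments shrinks the spread of portfolio returns roughly as 1/sqrt(N), even when individual outcomes vary wildly.

```python
import random
import statistics

def portfolio_return(n_investments, rng):
    """Average return over n independent toy investments, each between -100% and +200%."""
    return sum(rng.uniform(-1.0, 2.0) for _ in range(n_investments)) / n_investments

rng = random.Random(42)
for n in (1, 100, 10000):
    outcomes = [portfolio_return(n, rng) for _ in range(200)]
    print(n, round(statistics.stdev(outcomes), 3))
# The spread of outcomes drops from roughly 0.9 for a single company
# toward roughly 0.01 for ten thousand companies.
```

Real returns are of course neither independent nor uniformly distributed, but the scaling effect — more positions, more predictable aggregate — is the point the text relies on.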

In this way, functions that are taken care of by venture capital and crowdfunding today could be incorporated in automated trading systems.

Instead of turning to a crowdfunding website or a VC firm, a small startup or group could just apply for an IPO in minutes and receive funds according to the sum of intelligent systems’ assessments of the business idea — or possibly even of a non-profit project.

I admit that I know little about the theories of perfect markets, but in an intuitive way such a system could very well be described as something that approaches a perfect market, with progressively increasing resolution.

Swedish TV on Andrea Rossi and the E-cat


Andrea Rossi

Tonight Swedish Television, SVT, dedicated 25 minutes of the program “The World of Science” (Vetenskapens Värld) to the Italian inventor Andrea Rossi and his controversial energy device, the E-cat.

The program is online until January 16 on this link: (the part on Rossi starts at 24:30 minutes).

Update: And here’s an English transcript.

I believe SVT reporter Linus Brohult did a good job, balancing different opinions and trying to reach a broad audience.

He got some interesting quotes from physicist Anders Åberg, responsible for research on new energy sources at Vattenfall, Sweden’s major electricity utility and one of Europe’s largest.

“This is something new, and it’s our responsibility as an energy utility to check out options, of course, and this is an opportunity that perhaps will change the world,” Åberg said.

He said he didn’t believe Rossi is a fraudster.

“He is an entrepreneur, he just goes ahead and he doesn’t care about theories. He just wants to make something that works,” he said. Then he added:

“And we have a problem. We need to replace nuclear power eventually.”

– – – – –

Speaking about cold fusion, or LENR as most people refer to it (although no one yet knows what it is exactly, just that it seems to produce excess heat larger than what’s possible from a chemical reaction), I’ll try to give you an update soon on the very interesting work by MFMP — the Martin Fleischmann Memorial Project — whose aim is to make cold fusion scientifically accepted through a new model with open and live experiments on the web, with all data published in real time.

MFMP is now trying to replicate an experiment by the Italian researcher Francesco Celani which received a lot of attention during 2012.

This stigmatized invention called cold fusion seems to be moving more than ever before. During 2013 I believe we will have more answers on what it can produce. If it works we will have a clean and basically endless energy source that’ll change the world.

A really good post on ethics for machines, robots, cars…

Here’s a really good piece on the difficulty but also the importance of ethics for machines, robots, autonomous cars, arms and similar stuff powered by artificial intelligence: Moral Machines by Gary Marcus, Professor of Psychology at N.Y.U.

Prof. Marcus argues that the moment autonomous cars become so much better and safer than human drivers that we will prefer them from a moral point of view, that moment

…will signal the beginning of another [era]: the era in which it will no longer be optional for machines to have ethical systems.

Then he points out how difficult it will be to build ethical systems — clearly not as easy as implementing the three famous laws of robotics formulated by Isaac Asimov. On the other hand it’s like philosopher Colin Allen puts it: “We don’t want to get to the point where we should have had this discussion twenty years ago.”

The elegant conclusion by Prof Marcus is:

What we really want are machines that can go a step further, endowed not only with the soundest codes of ethics that our best contemporary philosophers can devise, but also with the possibility of machines making their own moral progress, bringing them past our own limited early-twenty-first century idea of morality.

His piece highlights something I’ve tried to emphasize in an earlier post — when looking at the impressive opportunities accelerating technology development is offering us, it’s of absolute importance to consider also how culture and human values have evolved over time and how fundamental they are in order to build something prosperous from our technology.

It’s easy to forget these values when you understand the dazzling power of technology, but tech in itself will take us only so far. Or indeed nowhere.

What’s the size of an artificial mind?

Our mind is not just the brain — it’s also our body with all our senses and ways of communicating.

Now this is one question that has intrigued me for some time, and I think I have a good and quite simple answer.

I’ve already made it quite clear that I believe we will be able to create human-like artificial intelligence — strong AI — within a couple of decades, and that after that there will be artificial intelligence far greater than that of humans, in every sense — intellectual, emotional, intuitive…

In an earlier post I also tried to draw a picture of what super intelligence would be like — with human-like pattern recognition and associative capacity but at a much larger scale, with tons of information from crowds of sensors and huge amounts of stored data in all forms — big data.

But what I’ve found difficult to understand is how large such an artificial mind could be. Or what would actually delineate it.

Some expect a human like mind to arise from internet’s complexity, and you could get that feeling reading about the fascinating photo journalism project The Human Face of Big Data.

During research for the project, photojournalist Rick Smolan was told by Marissa Mayer, now Yahoo’s CEO, that big data is ‘like watching the planet develop a nervous system.’

Smolan himself later said ‘It’s like watching the planet wake up.’

Even though I doubt that an artificial mind will arise without us actively creating it, the question is whether it could really become that big. One global mind extending itself everywhere on earth.

Or whether it would be conceivable with lots of small minds in a computer system, interacting within the system, maybe merging and separating, flowing from one point to another? The Ghost in the Machine.

In a few words — where’s the border of a mind? What’s its size? And how do you delineate it?

Well, I believe the answer is quite simple. It all comes down to one thing — which sensors and means of communication it is in control of.

In the end that is what defines a human mind. We’re not just a brain. We’re a body, with all its senses and ways of communicating with the world around us. And a lot of the brain’s activity is about caring for the whole body, listening to it, making sure it feels good. And there’s also the aspect of systems in the body substantially influencing the human mind.

Obviously an artificial mind will also need sensors and ways to communicate. Most probably these sensors and means of communication will interact with the mind and be a part of it. And just like in humans they will be the natural delineation of the mind, defining where it reaches.

Sounds clear? Well, I don’t think it really is.

Because what we might think of is a mind with a fixed number of sensors and communication tools — a mind which grows and develops gradually with its experiences, like humans. But you could also imagine a highly flexible system capable of adapting very quickly to a varying number of sensors and communication tools while keeping a stable conscious core handling all these fluctuations.

What I’m trying to say is that we should not be surprised at anything. It’s just our fantasy that puts limits on how far artificial intelligence will go.

What if we make animals as smart as humans?

Recently a group of researchers reported that they had designed a brain implant that sharpened decision making and restored lost mental capacity in monkeys.

Even if that’s a very interesting piece of research (you can read the NY Times report here), it’s of course light-years from making animals intelligent.

Still one bizarre idea immediately struck me.

I have no difficulty accepting the idea that, sooner than many believe, we will be able to make computers that are as intelligent and conscious as humans, and that we will gradually integrate with these machines in the same way we already integrate with all sorts of medical technology.

In this way we will be able to extend our mental powers and keep following an ever accelerating pace of development.

But if we will be able to do this, we must sooner or later also be able to add intelligence to animals with some kind of brain implants.

Now try to imagine the implications. Let’s say we make a tiger, a lion or a bear as intelligent as ourselves. Such an animal would be able to do all sorts of things, communicate with us in a full-blown way and understand how to obtain what it wants.

So what’s the difference between this animal and a humanoid, an intelligent robot? Well, it will be faster and stronger than us, as robots will probably be, but the animal will also keep all its strong instincts, designed for survival in a wild environment. Robots will probably have no such instincts.

Or should we imagine that if the animal gains intelligence and self-awareness, it will be able to understand and control its instincts and adapt its behavior?

I told you it was a bizarre idea.

But some day someone will find out.

Exploring ethics for machines

“If we admit the animal should have moral consideration, we need to think seriously about the machine.”

That’s how Northern Illinois University Professor David Gunkel puts it, discussing whether and to what extent intelligent and autonomous machines that we are developing can be considered to have legitimate moral responsibilities and any legitimate claim to moral treatment.

In his new book The Machine Question: Critical Perspectives on AI, Robots, and Ethics, he examines these questions, being inspired by the fact that engineers and scientists are increasingly bumping up against important ethical questions related to machines.

“The real danger is if we don’t have these conversations,” Gunkel says, adding:

“Historically, we have excluded many entities from moral consideration and these exclusions have had devastating effects for others.”

Six industries that will be hit by digital revolution

Some people in the music industry still wonder what happened when an established and profitable business model was torn to pieces in a few years. Well, soon a lot more people in several other industries will ask themselves the same thing.

I just outlined six industries that will be hit in the near future in an article in Next Magasin, and I will come to them in a moment. But first, let’s have a look at the driving force – the digital revolution.

Often I talk to people who look upon the internet as something that has arrived and is more or less settled. Online banking and shopping, YouTube, Facebook, Twitter and Pinterest, news, and entertainment. All within a search query.

If you agree you might be interested to read on. Because what’s important to understand is that this is just the beginning.

The music industry was not a unique case. It just happened to be first for a number of reasons, one of them being that the content was so easily digitalized – which meant it could be copied and then distributed and shared at no marginal cost.

That’s the first driving factor of the digital revolution. And it applies to anything that might be digitalized, which is more than many believe. Some think that even physical objects should sooner or later be considered digital information, i.e. the information needed to produce them with future advanced developments of 3D printers.

Biology is being digitalized as we can scan the DNA code faster and faster at rapidly decreasing cost. Some people even think that the human mind can be digitalized and thus be possible to copy, distribute and share at no cost.

But there are more fundamental forces driving the digital revolution:

  1. Once something is digitalized it can be copied, distributed and shared at almost no cost (I said that).
  2. The internet is a massive distribution machine where you reach the world by pressing a key.
  3. Internet is the perfect tool for sharing.
  4. The internet gives unprecedented possibilities for networking between people with shared interests.
  5. Distribution, sharing and networking make the internet a new tool for production, enabling a new model for global activities besides big corporations and state-financed production – work performed by large groups with a shared interest, e.g. Wikipedia – a model that was once useful only locally, between friends, at a small scale.
  6. The internet is also a mixer. Digitalized content can be mixed in ways that were hardly possible before, like sharing the music you’re listening to in real time on Facebook.
  7. Artificial Intelligence – probably the tool that has the greatest potential to bring revolution to many industries.

These are the forces, and I have identified six industries that are already obviously being hit by them, or that will be soon. I’ll give you the short version, as I could go on for hours about this.

1. Media

Already hit. Content is easily digitalized, and if someone believes that printed media made a mistake by making everything free on the internet, I’d say there was never a choice. Sharing is too easy, and there will always be someone offering content for free.

What might be changing the situation is the introduction of tablets, mainly because of the importance of packaging – both of the product itself, which is slim, elegant and easy to use for the masses, and of the content, which might be served in an attractive way that people seem willing to pay for, while ads can be elegantly integrated. A kind of new/old business model might be arriving.

But of course there’s a huge amount of experimenting to be done, and lots of new models might arrive.

For the TV industry everything is still much less certain. The packaging and the user interface will change, and the highly fragmented market we see now will be consolidated. Users will win, with more choice, lower prices and easier consumption.

2. Education

Education is changing extremely rapidly right now.

Web-based initiatives such as Udacity, Coursera, edX and Knewton at university level, and Khan Academy at K-12 level (elementary and high school), are having huge success. They’re proving that lectures can be digitalized and copied like music, and that every student in the world can have access to the best institutions in the world.

It’s not just about video recordings of lectures – it’s a lot more advanced, with exercises, team work, forums and more. And this is just the beginning.

One interesting question is where academic research will be done in the future, if universities are hit by consolidation and rapidly shrinking numbers of students.

3. Health care

Even here the changes are huge. Through artificial intelligence such as IBM’s Watson (which won Jeopardy over humans) – already being trained for this – doctors will get assistance in natural language, in dialogue with the patient, to find evidence-based diagnoses and treatments.

In this way, qualified diagnosis might be offered almost for free to many people in the world.

It could be performed by a personal diagnosis device, such as the one targeted by the Qualcomm Tricorder X Prize, with a $10 million award to the team that can diagnose 15 illnesses in 30 consumers within three days using such a device.

Health condition could be monitored in real time with sensors integrated in a ring such as the one being developed by Swedish Sense M.

Add telemedicine and the generally accelerating progress within medicine, and you can see how fewer resources can be dedicated to treating a lot more patients than today – which is necessary in order to offer qualified health care to everyone in the world.
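The kind of automated diagnosis support described above can be caricatured in a few lines (my own illustration, nothing like Watson’s actual architecture, with a hypothetical three-condition knowledge base): score candidate conditions by how many of the patient’s reported symptoms each one explains.

```python
# Hypothetical knowledge base: condition -> typical symptoms.
KNOWLEDGE = {
    "influenza": {"fever", "cough", "myalgia", "fatigue"},
    "common cold": {"cough", "sneezing", "sore throat"},
    "allergy": {"sneezing", "itchy eyes"},
}

def rank_conditions(symptoms):
    """Rank conditions by the fraction of their typical symptoms
    present in the patient's reported set (a toy evidence score)."""
    scores = {c: len(symptoms & s) / len(s) for c, s in KNOWLEDGE.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank_conditions({"fever", "cough", "fatigue"}))
```

A real system would weigh evidence probabilistically over thousands of conditions and ask follow-up questions, but the principle of matching reported findings against coded medical knowledge is the same.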

4. Transportation

Here the key development concerns autonomous cars. They have already come much farther than you might think – Google’s cars have traveled over 300,000 kilometers. Increased safety and efficiency are huge driving forces.

Sensors monitoring traffic effectively, and systems such as those already integrated in navigators giving real-time traffic information, will make new ways of handling logistics possible. Coordination of independent vehicles could be offered virtually, competing with traditional logistics companies.
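A toy sketch of such virtual coordination (my own illustration, with hypothetical positions): greedily assign each pickup request to the nearest available vehicle.

```python
def dispatch(vehicles, requests):
    """vehicles: {vehicle_id: (x, y)}, requests: list of (x, y).
    Returns {request_index: vehicle_id}, each vehicle used at most once.
    Assumes there are at least as many vehicles as requests."""
    free = dict(vehicles)
    plan = {}
    for i, (rx, ry) in enumerate(requests):
        # Pick the free vehicle with the smallest squared distance.
        vid = min(free, key=lambda v: (free[v][0] - rx) ** 2
                                      + (free[v][1] - ry) ** 2)
        plan[i] = vid
        del free[vid]
    return plan

vehicles = {"v1": (0, 0), "v2": (5, 5), "v3": (9, 1)}
print(dispatch(vehicles, [(8, 0), (1, 1)]))  # {0: 'v3', 1: 'v1'}
```

A real dispatcher would optimize over whole routes and time windows rather than greedily, but even this sketch shows how coordination is just computation over positions – something a virtual service can sell without owning a single truck.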

5. Finance and trading

Automated trading is a fact and it will not be less important in the future. What’s interesting, though, is that as it all comes down to a war over nanoseconds, innovative models describing the marketplace and real business activities will be critical.

For this reason the finance sector is now looking for mathematicians, programmers and engineers. And what’s even more interesting is that consumers are being offered access to the most powerful computerized trading platforms online, where they can try out new innovative ideas on how to model the reality.

Another interesting aspect is the one offered by the Swedish-American start-up Recorded Future, which indexes information on the internet just like Google does – but from a time perspective rather than a keyword perspective.

By analyzing this information, Recorded Future can even gain knowledge about the future, which in turn is fed into trading models that until now have been based exclusively on historical information.
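The time-perspective idea can be illustrated with a minimal sketch (my own, not Recorded Future’s actual pipeline, and with invented documents): instead of keying documents by the words they contain, key them by the dates they mention.

```python
import re
from collections import defaultdict

# Only ISO-style dates for simplicity; a real system parses
# phrases like "next quarter" or "in March" as well.
DATE = re.compile(r"\b\d{4}-\d{2}-\d{2}\b")

def index_by_date(documents):
    """Map each date mentioned in the corpus to the documents
    that mention it: a 'time-perspective' inverted index."""
    index = defaultdict(list)
    for doc_id, text in documents.items():
        for match in DATE.finditer(text):
            index[match.group(0)].append(doc_id)
    return dict(index)

docs = {
    "a": "Acme plans to launch the product on 2013-03-01.",
    "b": "The merger is expected to close by 2013-03-01, analysts say.",
    "c": "Quarterly results were published on 2012-11-15.",
}
print(index_by_date(docs))
```

Querying such an index for a future date returns every document that talks about that date – the raw material for forward-looking signals a trading model could consume.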

6. Law

Law is actually one of the sectors that was first believed possible to handle with computers, even before the internet.

Many aspects of the development today are covered by Richard Susskind in his book The End of Lawyers? from 2008. The takeaway is that many law firms believe and act as if what they offer is tailor-made, when in reality much of it is routine work that is easily automated with a measure of artificial intelligence.

Early examples are services such as Online Dispute Resolution.

In the years to come many law services might be offered at a very low price or even for free to people all over the world, many of whom will have the possibility to stand up for their rights for the first time.

Susskind’s message is that lawyers will have no future if they don’t understand this change and find new opportunities. New tasks can be to assist in assessing how technology might be used in society, e.g. for security and public safety, without threatening personal integrity.

These are the six sectors which I believe will be hit first, but of course there are several others being strongly influenced by digital technology and the internet. And in the end no sector will go free.

What is even more important is that a large number of new businesses will be born at the intersections of all these sectors, as they become easy to mix once they are digitalized. These opportunities are still largely unexplored, and are limited only by how far our ideas can reach.

We just launched Next Magasin – on how technology’s changing the world

Front page of the first issue of Next Magasin.

A few days ago we finally launched Next Magasin – a new magazine on how technology’s changing the world (just to be clear, it’s only in Swedish for now).

I’m the managing editor, and it’s been a great time working on the first issue, which is free to download or to read on iPad/Android/PC.

It’s been very exciting to shape the magazine from a strong feeling I’ve had over the last few years – a feeling that lots of people outside the traditional groups of geeks and tech pundits are starting to ask themselves what’s going on. They seem to be aware of a big change growing faster and faster, without being able to define what the change is about or what the driving force is.

I’m convinced that this change is technology driven and that it is invisible to many people because they always considered technology as a tool, not as a driving force.

Even though technology development accelerates and becomes more and more intimately present in people’s lives and difficult to overlook, they just don’t seem to be aware of its implications.

So my goal with Next Magasin was to start talking about this to a wider group of readers – a kind of magazine that I haven’t seen yet but that I think will start appearing.

One of my favourite examples is digital technology and the internet. Many people I talk with consider the internet something that has arrived and is complete. We will just go on using it.

What I try to tell them is that the internet has barely started. The disruptive force in digital technology is so huge that it’s hard to grasp.

And one of the main articles in the first issue of Next is “Digital Revolution shaking up six industries”.

My point is that the record and movie industries were not unique cases. They just happened to be first because their content was easily digitalized.

The digital shake-up will progressively come to all industries. It starts when a major part of the content produced in an industry is defined as digital information, as songs and movies were.

Then – as we have seen – copies can be made at no cost, information can be distributed over the internet, it can be shared, it can be mixed with information from other industries (new disruptive cross-industry services), and it can be handled and analyzed by intelligent algorithms, producing new useful information.

My bet, which I outline in the first issue of Next, is that this change is now coming to the following industries: Media, Healthcare, Education, Transportation (think autonomous vehicles), Law and Finance/Trading. I would like to share my arguments for this, but that would take several pages, as in the magazine…

Another major article in the first issue focuses on Artificial Intelligence, starting out with IBM Watson – the computer system that beat two human record holders in the quiz show Jeopardy in February 2011, understanding natural language and finding correct answers to a wide range of clues in just a few seconds.

The article explains how Watson is now being implemented and trained for use in healthcare and banking, and what future developments in AI could realistically bring us.

I’ve already found out that many readers are surprised at these achievements and eager to learn more about technology as a driving force for change.

I’m also convinced that it’s time to start discussing this among a wider group of people. And to do that, we need to talk a little less about technology in itself and more about its implications and what it is capable of doing.

I also believe that this is a sign of a new level of maturity in technology, which now concerns lots of people since it’s no longer only a tool but something used for interaction at all levels with other people.

The discussion is getting urgent because there’s no way of slowing down the pace.

Technology development will keep accelerating, and it’s more important than ever to make people conscious of the importance of shaping technology into something good, something we want to live and interact with, and to identify weaknesses in order to continuously improve different aspects of technology as much as we can.

My exciting job will now be to evaluate our readers’ impressions of the first issue and to find the right direction for the next issue of Next!

Next Magasin has a page on Facebook and the twitter account @Nextmagasin. If you have suggestions for contributors to the magazine, please let me know.

(Next Magasin is published by Talentum Sweden which also publishes Ny Teknik and the business magazine Affärsvärlden, among other titles).

Forget the digital divide

The Digital Divide

The concept of The Digital Divide – the haves and the have nots in the digital era – has been firmly established and taken for granted for at least a decade.

Therefore it was very refreshing to read Christopher Mims’ post in Technology Review, entitled “There’s no Digital Divide”.

It’s basically an interview with Jessie Daniels, Associate Professor of urban public health at Hunter College and CUNY, who recently tweeted with resignation:

Why @nytimes must there be a “new digital divide” ? Why this tired, dis-empowering rhetoric in which the poor are always “doing it wrong” ?

The tweet referred to a piece in NYTimes with a new interpretation of the digital divide – that the have nots now have the internet but are using it in a less clever way, mostly wasting their time.

In Christopher Mims’ post, Jessie Daniels argues that the Digital Divide is based on how middle- and upper-class whites use the internet.

In reality it turns out that the typical have-nots – poor and with low education – use the internet as much as typical middle-class whites do, and in a very constructive way.

And I would argue that even if they use it in another way, it might still be productive, effective, creative or whatever you want. The basic power of the internet is that it’s so flexible that it allows anyone to find new ways of using it that haven’t been found before, especially new ways adapted to local conditions.

Christopher Mims then asks Jessie Daniels about her opinion on a federal program for spending $200 million on putting digital educators in schools.

Daniels’ point is that users with less experience or education still have difficulties with some aspects of digesting information – for example how to identify hidden interests behind websites that seem to be one thing but actually are something else. In that respect the federal program could be useful, if implemented in the right way.

Another perspective on the Digital Divide is the fact that inventions now travel faster across the world. The classic example is to compare how long it took for technologies such as the television, the mobile phone and the internet to reach a major part of the world’s population.

As this amount of time gradually gets shorter, in an accelerating development, the space for a potential divide between haves and have nots regarding any new technology gets smaller and smaller, having less and less importance.

What is important to understand, though, is that differences in culture and local conditions make people use new technologies in different ways all over the world, making these technologies develop in a richer way than ever before.

Watch out for humans pushed to the edge

I just read a piece in Wired Magazine on high school students holding debates at 350 words per minute. And it came to my mind that in the history of technology it’s a well-known fact that when a technology is in its last stages, just before being widely replaced by a better and more versatile technology, it usually gets developed with extreme features – a kind of final sprint.

Examples are the gas stove and the steam engine.

So I figured that humans being pushed to the edge – not just in elite sports but also in other maybe more intellectual areas – should be a possible sign of a better and more versatile technology arriving, replacing the biological brain.

So watch out for human intelligence being pushed to the edge. And tell me what you observe.

What would it be like to be super intelligent?

The superintelligent alien Megamind from the computer animated movie by Dreamworks Animation.

Have you ever considered the immediate images appearing in your mind when you hear the word super intelligence? Maybe an alien with a huge cranium staring at you… or a computer controlling every step you take…?

Or have you ever tried to imagine what such a super intelligence actually thinks of you?

It might turn out to be difficult, but I think it’s a useful thing to reflect on.

As I have mentioned before, there are good reasons to believe that by 2045 artificial intelligence will surpass the total intelligence of all human brains in the world – in an intellectual, emotional and moral sense alike.

That’s a scary prospect in itself, but even though it might be difficult to imagine what this really means, it’s probably even harder to imagine what a super intelligence would be like.

Or what it would be like to be super intelligent.

One reason is that even if some humans are more intelligent than others – and sometimes one individual is more intelligent in one way but less in another – generally speaking all humans are more or less equally intelligent, compared to other animals for example.

So when we think of differences in intelligence, or of someone more intelligent than ourselves, we have no reference other than very slight differences in intelligence.

A super intelligence is something completely different, rather like the difference between us and a chimpanzee.

This is actually a crucial point in order to have any idea of what super intelligence would mean for the world’s development, and I believe that most people stop at the word super intelligent without even reflecting on what it represents.

On the other hand I believe that we can get a basic understanding of its properties.

The easiest way to start is to have a look at powerful computer systems today. In the last decade they have become impressively good at analyzing enormous quantities of data – often called Big Data.

This happens all around us. Banks are continuously monitoring transaction data from credit and debit cards in order to discover attempts at fraud, and often they can prevent your card details from being used by someone else in a matter of seconds.

The same goes for mobile network operators, monitoring calls and transactions made with mobile phones.
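The fraud-monitoring idea can be caricatured as a toy z-score rule (my own illustration, with made-up amounts; real fraud systems combine many more signals such as location, merchant and timing in far richer models):

```python
from statistics import mean, stdev

def flag_anomalies(history, new_amount, threshold=3.0):
    """Flag a transaction whose amount deviates from the card's
    historical amounts by more than `threshold` standard deviations."""
    mu, sigma = mean(history), stdev(history)
    z = abs(new_amount - mu) / sigma
    return z > threshold

# Hypothetical purchase history for one card, in some currency.
history = [23.0, 41.5, 18.2, 35.0, 27.9, 30.4, 22.1, 38.7]
print(flag_anomalies(history, 29.0))   # typical amount: not flagged
print(flag_anomalies(history, 950.0))  # wildly out of pattern: flagged
```

The point is that the check is cheap enough to run on every single transaction in real time, which is how a card can be blocked within seconds of being misused.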

Google introduced its Flu Trends in 2008 – a website giving accurate information on flu activity in real time in over 30 countries, based on patterns in masses of certain flu-related web searches fed into an algorithm developed by Google.
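The Flu Trends idea can be caricatured as a simple regression (my illustration with invented weekly numbers, not Google’s actual model): fit a line from flu-related search volume to reported flu activity, then estimate current activity from search volume alone.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical weekly data: normalized flu-search volume vs. reported cases.
searches = [0.8, 1.1, 1.9, 2.4, 3.0, 2.2]
cases    = [410, 520, 930, 1150, 1460, 1070]

slope, intercept = fit_line(searches, cases)
# Estimate this week's flu activity from a search volume of 2.7,
# available immediately, days before official case counts arrive.
print(round(slope * 2.7 + intercept))
```

The real system selected the best-correlating queries out of millions and validated them against years of CDC surveillance data, but the core trick is exactly this: a cheap, instantly available proxy standing in for a slow official measurement.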

Big Data is a new gold mine with a vast number of opportunities not yet discovered. Its potential is being investigated by both private companies and public organizations, such as the UN through the initiative Global Pulse.

Computer systems analyzing Big Data are in some sense similar to humans when it comes to discovering patterns and trends in information.

Pattern recognition is actually one of the human brain’s most characteristic strengths, used both for recognizing known objects or faces in images, or words in the sound of spoken language, in milliseconds.

The difference with computer systems is of course that they are immensely much more capable than humans of grasping enormous quantities of unstructured data and finding patterns and trends in that ocean of data.

This capability is already in place, and it will only get stronger in the years up to 2045. Reasonably, it will by then include the capability to sort out patterns in all kinds of data from all kinds of sensors – thus not only numbers and transactions but also sounds, images, videos, radio waves, movements, temperatures, chemical concentrations and so on.

Now try to imagine this capability combined with the human capacity to make associations between different observations of patterns. This kind of capacity is not yet well developed within artificial intelligence, but there’s no doubt it will be.

And once such a feature will be achieved it will most certainly also be much more powerful than the human one.

So we can imagine some kind of consciousness able to monitor enormous quantities of data and information in real time and discover patterns and trends in that information, and then also immediately put these observations in relation to other earlier or present observations.

Then try to imagine such a consciousness developing over time by learning from its observations and associations.

Whatever physical shape this consciousness might have, I would expect it to have a vastly more complete understanding of the world than mine, and also be able to come up with much more elaborate and powerful new ideas than the most brilliant human person, and also much faster.

Personally I believe that you should also expect it to develop a much greater emotional capacity than humans, which would ultimately make it a very impressive being, in front of which I would feel very limited and have reason to be extremely humble.

The beauty in all this is of course the possibility that we might integrate with this kind of consciousness.

Now if you imagine lots of them, or lots of us integrated with them – all with different experiences (which is one of the fundamental strengths of humanity and nature in general), it’s also possible to imagine an unprecedented speed of progress, development and expansion of the world we live in.

In the end it all adds up to a possible way to explain how the exponentially accelerating property of the development, identified by Kurzweil and others, could actually be expected to continue even though it will lead to a pace which is very hard to imagine.

At least for us, ordinary intelligent humans.

– – –

PS. To have lots of super intelligent beings, with different experiences, will of course be very important in order to have a safe and well balanced development.

The most difficult step might be when the second super intelligence in the world is created. The day it is born, will the first super intelligence ever created feel jealous of its younger sibling and turn hostile, wishing to remain the one and only super intelligence in the world?

Think about that.

Because we will be its parents.

Defkalion posts job listing for 21 professionals

I just noted that Greek Defkalion recently posted a job listing, looking for 21 professionals, mostly engineers.

The job listing hints at progress towards industrial production of Defkalion’s Hyperion product, supposedly based on an LENR process and a potential competitor with the E-cat, a similar product developed by Andrea Rossi.

According to a letter sent earlier to interested parties, Defkalion “plans to have a fully operational prototype ready by July 2012”.

At Ny Teknik, we’re following the development closely and we will come back with further reports as soon as there’s more confirmed data available.

Our coverage on the E-cat and Defkalion can be found here.

Greek government attends test of Defkalion’s technology

Representatives of the Greek government on Tuesday attended a test of Defkalion’s energy technology – a potential competitor to Andrea Rossi’s ‘E-cat.’ Meanwhile, Rossi continues to develop his technology.

No results were presented. Another six groups are expected to perform independent testing of Defkalion’s technology in the upcoming weeks.

My report at Ny Teknik can be found here, including an update on the development of Rossi’s E-cat.

Our complete coverage on Defkalion and on Rossi’s E-cat can be found here.

NOTE: My report at Ny Teknik was updated on Feb 29 after an interview with Alexandros Xanthoulis.

Will robots beat human champions in soccer by 2050?

Still some time to improve.

As you might know, the official goal of the robotics competition Robocup is to field a team of robots capable of winning against the human soccer World Cup champions by 2050.

At Ny Teknik we recently made a poll among our readers – most of them professional engineers – asking whether this is likely to happen or not.

And in contrast to many other of our polls where one answer tends to be clearly winning, it turned out that our readers were split in two fairly equal groups in this case – one believing in the robots’ capacity by 2050 and the other with faith in humans’ still being superior at that point.

It says something about how differently people judge the way technology will develop over the next 40 years.

Personally I have no difficulty imagining a soccer team of maybe only five or six robot players completely outperforming a World Cup champion’s team of 11 humans by 2050.

With a complete overview of all the players on the field through a series of sensors, the robots will only need a couple of extremely powerful and overwhelmingly precise passes to reach the human team’s goal, and then pull off a tremendous shot that no human goalkeeper will be able to catch, or even risk being hit by.

What do you believe?

Did you ever wonder what technology really wants?

Kevin Kelly, writer and founding editor of Wired Magazine, did. And he put down the answer really well in his book “What Technology Wants” published in 2010.

It’s still well worth reading, offering inspiration to anyone who wants to understand how we should shape technology to do more good and less harm.

If you want Kelly’s short answer to what technology ‘wants’, it’s more or less:

“… to generate more options, more opportunities, more connection, more diversity, more unity, more thought, more beauty, and more problems. Those add up to more good, an infinite game worth playing.”

Kelly puts down a series of good guidelines that are key in order to play this game well, and I will come back to them at the end of this post (especially noting that they are a complement to the importance of protecting human values that I addressed in this post).

As a conclusion Kelly notes that technology actually seems to have its own direction:

“Technology is acquiring its own autonomy and will increasingly maximize its own agenda, but this agenda includes – as its foremost consequence – maximizing possibilities for us.”

What set off Kelly in his research however, was a series of more basic questions that many people might ask themselves, from small ones such as ‘Should I get my kid this gadget?’ to fundamental ones such as ‘Should we allow human cloning?’

He realized that in order to answer those, he first needed to understand what technology really is. What its nature is like.

Searching for the answer he first discovered that technology is a surprisingly anonymous and little used term, given that it has been a close and useful partner to humanity for tens of thousands of years.

To better encompass all aspects of technology before going ahead he coins the term ‘technium’, including not only physical technology in itself but also culture, art, social institutions and intellectual creations of all types.

And analyzing what the concept of ‘want’ means, he notes that even bacteria want something – food for example – and that the meaning of ‘want’ has to do with tendencies, urges and trajectories.

In line with what a number of other writers and thinkers have started to note in the last decade, Kelly observes the similarities in the development of technology and evolution of life, and he outlines the Technium as a natural successor to biologic evolution.

He notes six major stages in the evolution of life – six kingdoms – and nominates the Technium as the Seventh Kingdom.

But he also observes three important differences between biology, which is self-assembled, and technology, which is created (mostly) by humans:

1. Biology rarely borrows a feature which is no longer in use, to solve another problem. The Technium does this all the time.

2. Biologic life develops by incremental transformation, the Technium by jumps.

3. In biologic life species go extinct, inventions don’t.

(He actually argues convincingly that not a single invention has ever gone out of use or stopped being manufactured.)

Kelly then discusses a couple of concepts:

Exotropy – the rising flow of sustainable difference, the inverse of entropy – noting that a modern semiconductor microprocessor has the highest sustainable energy flow per gram per second in the known universe.

Deep Progress – arguing that it’s beyond doubt that life of humans has gradually improved substantially through history, but also that science needs prosperity and populations.

He then gets to another key concept which has been proposed by others but which is still controversial – that mutations and natural selection are not enough to explain evolution of life.

One example is the DNA molecule, which has been found to be optimally designed for what it needs to do. Still, taking into account the immense number of possible designs for this molecule, it’s too unlikely that it would have self-assembled by pure chance within the time span of life on earth.

Adding a third component – a kind of push in evolution which gives direction – helps. And this component seems to exist.

Kelly underlines that it’s not about something supernatural. Instead he indicates two driving forces in evolution of complex systems:

1. Negative constraints – laws of geometry and physics.

2. Positive constraints – self-organizing complexity generates a few repeating new possibilities.

These two facts explain what has been detected in several areas of complex systems: complex adaptive systems tend to settle into a few recurring patterns – patterns that are not found in the individual parts of the system.
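This settling behavior can be illustrated with a toy example of my own (not from Kelly’s book): the logistic map, a classic model from complex-systems theory, converges to the same recurring value from almost any starting point:

```python
# Toy model: a simple nonlinear system (the logistic map) settles
# into the same pattern regardless of where it starts.
def logistic(x, r=2.8):
    return r * x * (1 - x)

attractors = set()
for x0 in [0.05, 0.2, 0.5, 0.7, 0.95]:  # five very different starts
    x = x0
    for _ in range(1000):  # iterate long enough to settle
        x = logistic(x)
    attractors.add(round(x, 6))

print(attractors)  # {0.642857} – a single recurring value (= 1 - 1/2.8)
```

Every starting point ends up at the same attractor – a pattern that is a property of the whole system, not of any of its parts.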

From this observation he proposes a triad of evolution with these aspects:

– Functional – adaptation through natural selection.

– Historical – the lottery of random changes, accidents or other circumstances.

– Structural – inevitable patterns that emerge in complex systems

Kelly sums this up, stating that “life is an inevitable improbability”.

He then observes that the development of technology can be described by a similar triad, with the fundamental difference that the functional aspect – adaptation in biologic systems – is replaced by an intentional aspect in the technium: openness to human free will and choice.

And here’s the core of Kelly’s findings – our intimate and inseparable union with the technium on one hand, and our opportunity and duty to shape it on another.

“Humans are both master and slave to the technium, and our fate is to remain in this uncomfortable dual role. But our concern should not be about whether to embrace it. We are beyond embrace; we are already symbiotic with it.

Our choice is to align ourselves with this direction, to expand choice and possibilities for everyone and everything, and to play out the details with grace and beauty.

Or we can choose (unwisely, I believe) to resist our second self. When we reject technology as a whole, it is a brand of self-hatred.

By following what technology wants, we can be more ready to capture its full gifts. “

This is where Kelly starts to investigate how we should choose. And after having rejected possibilities of cancelling technology development altogether out of fear for its consequences (the Unabomber), or trying to slow it down in order to find a more human pace (the Amish), he finds this dilemma – which is also a kind of a golden rule for technology use:

“To maximize our own contentment, we seek the minimum amount of technology in our lives. Yet to maximize the contentment of others, we must maximize the amount of technology in the world.”

At this point Kelly is ready to get instrumental and proposes a number of checklists which I find really useful.

They are all based on the concept of ‘conviviality’ of technology.

The first is a five-point list on how we can deal with the inevitable risks and dangers of new technologies:

1. Anticipation

2. Continual Assessment

3. Prioritization of Risks, Including Natural Ones

4. Rapid Correction of Harm

5. Not Prohibition but Redirection

The second is six aspects with which we can measure the conviviality of a certain manifestation of a technology (look for more of …):

– Cooperation

– Transparency

– Decentralization

– Flexibility

– Redundancy

– Efficiency

The third and last checklist is an observation of what life ‘wants’, and consequently, given that technology is the inevitable extension of nature, also what technology wants – at the same time something we should have in mind when trying to shape technology to express its best aspects.

Life wants increasing:

– Efficiency

– Opportunity

– Emergence

– Complexity

– Diversity

– Specialization

– Ubiquity

– Freedom

– Mutualism

– Beauty

– Sentience

– Structure

– Evolvability

Apart from the beauty and elegance in Kelly’s analysis of technology and its origins, I find his conclusions extremely efficient and accurate. The checklists he proposes can be applied in an infinite number of cases and for a very long time frame.

However, one aspect that he hardly touches at all is the huge importance of the development of human values and social systems, which have grown in parallel with technology – almost as virtual reflections of each other, tightly interlaced – though obviously with much more attention given to the human and social aspect than to the technological one.

I addressed the importance of human values for the survival of a highly technologically developed society with super intelligent systems in this post – these values are actually necessary to prevent self destruction, and at the same time our only hope to be respected by a consciousness far more intelligent than ours.

And I believe that it is by following this double path – protecting fundamental human values and following the spirit of nature while shaping technologies we create – that we can reach the highest level of good in evolution.

Five industries where 2 billion jobs will be lost

Thomas Frey

Most people have understood that the music and the movie industries have been profoundly changed by the internet. Fewer realize that this was just the beginning.

Futurist Thomas Frey recently gave a talk on how 2 billion jobs will disappear by 2030, outlining five areas in which this will happen:

1. Power Industry
2. Automobile transportation
3. Education
4. Manufacturing
5. Manual labor

And here are his arguments:

The Power Industry with centralized power networks and big power plants will disappear as new disruptive energy technologies, enabling small scale and clean energy production emerge. Energy will be produced locally and distributed to communities in micro grids.

Automobile transportation will gradually shift over to autonomous vehicles, starting with delivery transportation and autonomous driving as luxury features in high end cars.

Education will be done via the internet through recorded courses. Focus will shift from teaching to learning. This will have a huge impact on jobs as teaching requires experts whereas learning only requires coaches.

Manufacturing will become dramatically different through the emergence and development of 3D printing, allowing local and specific manufacturing on demand.

Manual labor will be done by robots.

Basically I agree with Frey, although I believe that this shift might happen sooner than he thinks.

However, some of these areas might not be transformed dramatically in the short term. The power of 3D printing has been extensively debated lately, and critics don’t believe it will ever be a very powerful force.

Certainly 3D printing is in its infancy, but we also know how industrial development can take a concept from niche applications to the mass market. There’s no reason to believe this won’t happen to 3D printing.

Education on the other hand is a sector which will probably be hit hard in the next few years, leading to a dramatic transformation which will go on for a long time.

Looking further ahead, all the way to 2030, Frey most probably underestimates or even forgets the power of AI. He points out that “nearly every physical task can conceivably be done by a robot” but fails to acknowledge that by then a large number of intellectual tasks will also be performed by AI, without humans.

In that perspective far more than 2 billion jobs done by humans today will disappear by 2030, but as Frey notes, several new completely different jobs will be created.

Autonomous cars highlight fundamental questions

I like the recent call from Molly Wood at Cnet News (where I used to work in 2009): “Self-driving cars: Yes, please! Now, please!”.

She notes quite obvious advantages with autonomous cars – safety, efficiency and environmental improvements – and observes that the forces working against adoption are fear and love of driving, emotions so strong that Alan Mulally, CEO of Ford Motor Company, recently insisted that Ford would not be developing self-driving cars, or even introducing self-driving mode in vehicles.

I suspect that Mulally’s insistence at this point has more to do with not wanting to disturb passionate drivers/customers or Ford’s shareholders than anything else.

Another issue with autonomous driving is regulation, which was discussed recently at a symposium at Santa Clara University, as reported by the New York Times.

Unanswered questions like whether a police officer should have the right to pull over autonomous vehicles or what insurance they would need were brought up at the symposium.

These are all important issues that need to be resolved. Molly Wood’s proposal, which I find valid, to help push consumer and manufacturer adoption is to introduce mandatory auto-mode zones or drive times.

Another interesting and hands-on approach is the EU-project SARTRE which aims at realizing a system for road trains or ‘vehicle platoons’ where a number of cars are automatically guided at controlled distances behind a lead vehicle with a professional driver.

Volvo Car Corporation, which participates in the project, recently released a video showing three cars automatically guided at six meters (about 20 feet) distance from each other at 90 km/h (56 mph) behind a lead truck.

These efforts and approaches are effective and important, as are Google’s autonomous vehicle research program, which has achieved 200,000 miles of driving without an accident, the Chinese car Hongqi HQ3, Darpa’s Grand Challenge race, the world’s first legislation on autonomous vehicles in the state of Nevada in June 2011 and several other initiatives.

Yet the discussion on autonomous driving is only a precursor of a series of other more fundamental questions regarding capable and autonomous machines, questions which will be much more delicate to answer.

At the heart lies a fact that we all know and that Molly Wood points out – that computers are better at certain things than humans are.

But we know this only up to a certain point. It’s easy to admit that computers are better at solving differential equations, but harder to accept that they could beat humans in a quiz show like Jeopardy (IBM Watson, February 2011) or be more capable of driving cars safely and efficiently.

And even if we accept this, it will all get really sensitive the day we decide that humans are no longer allowed to do certain things that computers do better.

I’m perfectly convinced that we will have roads where humans aren’t allowed to drive, because they won’t be able to handle the highly efficient, high-speed environment managed by computer systems on those roads.

That will hurt.

In a sense we are entering a time of transition when these issues will be very difficult. At what point are we sure that the computers are better? And what if they are much better but fail from time to time, with fatal consequences? How should we compare failures of computers to those of humans? Counting lives?

A good example can be found in aircraft. While automated systems are making air transport gradually safer they also create frustration among pilots.

In incidents like the crash of Air France 447 in the Atlantic with 228 people on board in June 2009, several systems seem to have failed and disconnected themselves, leaving the pilots to handle an almost impossible emergency situation manually.

Part of this problem is that automated systems are still not intelligent enough to invent and attempt new solutions to unforeseen situations.

Part is the essential issue that when we let computers take over difficult tasks, humans that used to handle those tasks get less training and gradually lose their skills.

This could become a significant problem the day a large share of roads are reserved for autonomous vehicles only, while poorly trained humans are still required to drive on a few smaller roads.

But if it will hurt some people to no longer be allowed to do certain things that computers do better, the real concern will come the day we have to decide whether to give intelligent computers the same rights as humans, in the name of equality.

This might seem distant, and today it’s a hypothetical situation. It comes down to the discussion of whether human consciousness is a mysterious entity, separate from the matter of the body, or an emergent property of the complex interaction between 100 billion neurons in the human brain – and in the latter case, whether the same emergent property could also arise in a non-biological system.

Among those who have discussed this intensively are Ray Kurzweil, arguing that consciousness is an emergent property of any system as complex as the human brain, and John Searle, philosopher at the University of California, who calls Kurzweil a materialist and is convinced that consciousness requires biology.

Personally I find Kurzweil’s view more likely to be true and I expect artificial intelligence to be conscious at a certain point, forcing us to decide upon the rights of intelligent machines.

But I also find it interesting to reflect upon when consciousness first emerged. Psychologist Julian Jaynes suggested in his book “The Origin of Consciousness in the Breakdown of the Bicameral Mind” from 1976 that consciousness as we understand it – an introspective and self-aware way of reasoning – might be as young as 3000 years.

Before becoming conscious, humans would simply have functioned very well anyway, behaving more or less intuitively and obeying internal “voices” as orders on what to do – a hypothesis called Bicameralism.

Although the ideas of Jaynes are not much supported today, it’s interesting to note how much we are able to do without being conscious about it – walking or driving all the way from home to work for example. Or how well some people do things like acting or playing football, while they actually say they perform less well if they become conscious of what they are doing.

Jaynes’s thesis was that consciousness was a culturally evolved solution but we might as well imagine it to have been supported by subtle biological changes in the brain structure, making this immeasurable property possible.

In that case, finding the key to consciousness in artificial intelligence might turn out to be a real challenge. Even though modern research on brain simulation is aiming at self organizing chaotic systems imitating the human brain, nature might have had to search intensively to find the particular solution that opened the way to a conscious mind.

Yet I believe we will be successful in creating conscious AI with emotional capabilities and that one day the discussions on autonomous vehicles will seem as minor as permitting washing machines or not.

On the other hand – getting there slowly, step by step, is absolutely essential.

Defkalion offers testing of cold fusion reactors

The Greek company Defkalion has invited scientific and business organizations to test the core technology in its forthcoming energy products. The products are based on LENR – Low Energy Nuclear Reactions.

Read my report at Ny Teknik here.

Humanoids getting recent mainstream interest

Lisette Pagler as the humanoid Anita in 'Äkta människor'

“Robot & Frank” is a new movie at the Sundance Film Festival 2012 in Park City, Utah. It’s directed by Jake Schreier and is about an old ex-thief being taken care of by a robot nurse.

And a couple of days ago the first episode of the Science Fiction TV Series “Äkta människor” (Real People), produced specifically for Swedish Television SVT, was shown.

Just two picks that hint at an awakening interest in robots and humanoids in mainstream media. Of course we’ve seen a lot of it before, especially in movies, but I didn’t actually expect the small Swedish SVT to get into science fiction yet.

Abundance – The future is better than you think

For everyone interested in the powers of contemporary and future technology development and its potential to change the world into something better, there’s a new book that seems interesting to read: “Abundance: The Future Is Better Than You Think” by Peter H. Diamandis.

Peter H. Diamandis is chairman and CEO of the X-Prize Foundation, which has defined a number of different X-Prizes as incentive for major technology development projects, and he is also cofounder and chairman of Singularity University.

The book focuses on how technology actually has resolved a large number of problems and on how the world is getting better even though many believe it’s getting worse.

Four emerging forces are studied: exponential technologies, the DIY innovator, the Technophilanthropist, and the Rising Billion.

Why Copyright and Privacy will be Battlefields

Don’t think that what we’re seeing with the proposed U.S. internet legislation known as SOPA and PIPA is a one-time phenomenon – a battle that might be won or lost.

Discussions and battles on Copyright and Intellectual Property will be a main ground for conflicts and debate in the coming decades.

Another main area for long time discussions will be Privacy.

Together these two topics might become more important to discuss than any other subject in our society in the coming years.

There are good reasons for this as tons of new situations related to them will emerge while information technology is advancing.

For IP, the importance stems from the fact that everything in our world is becoming gradually more defined by information and less by material properties – eventually, possibly, even humans.

The need for debate on Privacy on the other hand will grow as it will be gradually easier to paralyze society with small means when everything becomes more dependent on information technology. This risk will create need for more sophisticated surveillance and monitoring, potentially threatening our privacy.

In a way IP and Privacy are in themselves related to the two basic values that I discussed here – respect for every (human) consciousness and respect for knowledge – art, literature, music, science and technology.

The first one – respect for another consciousness – has its origins in thousands of years of human civilization and beyond. It’s a spiritual value that we have developed and which has turned out to be fundamental for building a society where people can live in peace together. In Christianity it is expressed through the Golden Rule: “One should treat others as one would like others to treat oneself”, but it can of course be found also in other contexts.

The second value – respect for knowledge – has grown out of experience of the importance of knowledge for humanity. Knowledge is the basis on which we are able to offer gradually better conditions for all humans to live in, and hopefully in a longer perspective also for nature and for everything living in the universe.

These values are old and easy for most of us to agree upon.

Rules on IP and Privacy are not that old, and because they are also more specific they are still being developed, changing with the context in which they are applied. Consequently they also need to be discussed.

This is why, in one sense, you could say that the discussion of these concepts has barely started.

Protection of IP has until now been ensured by fairly simple rules of copyright and patents. We have seen these rules questioned in recent decades as a result of the unlimited copying made possible by digital technology and widespread internet communication.

Copying and sharing is of course a reality that we will have to live with, and it will become gradually more difficult to control as everything we consume will increasingly be based on digital information that is easily copied and shared.

It’s easy to see advantages both with very low and very high levels of copyright protection. A low level makes it easy to share information to lots of individuals at low cost while a high level is an incentive for creating new knowledge.

Just to put it in an extreme perspective it might be worth thinking of a situation when human consciousness can be defined in terms of digital information.

Not everybody agrees that this will ever be possible, but let’s just assume it will. The question is then how important it will be to protect that information from being copied and shared.

Of course it’s difficult to imagine the future context in which this would be a reality, and what society and the world would look like at that point. But let’s imagine it’s a world where we have learned to manage giant amounts of information, and also giant numbers of individual consciousnesses living in peace together.

The question is then if we in such a context could see sharing of the information that holds a consciousness as a possibility or a threat.

I believe the answer is not easy, and I’ll leave the discussion at that, noting that protection of intellectual property is something we will need to elaborate continuously and with care for many years to come.

The topic of Privacy might become even more delicate. Basically different levels of Privacy are what make the difference between utopia and dystopia in science fiction movies and other depictions of future societies.

Already today many of the systems in society are connected to the internet in one way or another, and gradually vital public functions such as power grids, fresh water supply, and air and rail traffic control are becoming vulnerable to attacks from malware.

The Stuxnet worm in 2010 made it obvious that very high levels of sophistication are possible when designing malware to attack distant but extremely well defined targets. And that’s just the beginning.

Gradually our society will be more and more dependent on connected computer controlled systems so complex that we might not even be able to take care of their maintenance without using other advanced computerized systems.

Extending this scenario to genetics and nanotechnology, with risks ranging from bio-engineered viruses created for destructive purposes to self-replicating nanobots going awry, it becomes obvious that increasingly intelligent surveillance will be more and more important. And this surveillance will increasingly be performed not by humans but by intelligent systems.

One way to understand this type of technology is to look at how banks and financial institutions today continuously monitor the use of, for example, credit cards to detect unusual activity. These systems are very powerful and successful in detecting fraud, and still they are just the beginning of what we will have to develop to protect society.
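As a minimal sketch of the principle (the single-feature rule, the amounts and the 3-sigma threshold are my own illustrative assumptions; real fraud systems combine many more signals), such monitoring can be as simple as flagging transactions that deviate strongly from an account’s history:

```python
from statistics import mean, stdev

def flag_unusual(history, new_amount, threshold=3.0):
    """Flag a transaction whose amount deviates more than `threshold`
    standard deviations from this account's history (a toy rule)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_amount != mu
    return abs(new_amount - mu) / sigma > threshold

past = [12.50, 40.00, 23.10, 18.75, 31.40, 25.00]  # earlier purchases
print(flag_unusual(past, 27.00))   # typical amount -> False
print(flag_unusual(past, 950.00))  # wildly atypical -> True
```

The real systems learn each account’s normal behavior in far richer detail, but the basic idea – automated detection of deviations from a learned pattern – is the same.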

Obviously this kind of surveillance will be a potential threat against privacy. It’s not difficult to imagine a situation where an automatic, intelligent system monitors every step we take.

And it’s of course of fundamental importance that we find ways to perform an extremely efficient monitoring of dangerous activities that could put society at risk, without compromising individual privacy.

And this will be possible only by keeping the debate on privacy strong and vital, as it is today, without ever giving way to easy solutions.

I believe it’s extremely important for everyone to keep these two discussions in mind – on how protection of intellectual property should be designed and on how we can guarantee privacy while protecting society through efficient surveillance – whenever we get into discussions on technology development.

Why human values will be fundamental for a super intelligence

What do the ancient Greeks or the fight for civil rights have to do with the destiny of the universe? More than you might believe. Or rather – they are a fundamental part of it.

The short explanation is that without the experience gained through thousands of years of human civilization, a super intelligence wouldn’t have the necessary knowledge to avoid destroying itself. This is fortunate, because at the same time this is why we can hope to be part of and be respected by future super intelligence.

To make this clear, let’s have a look at Kurzweil’s analysis of the Singularity. Basically Kurzweil has studied general structures of development and has found that both evolution of biological life and development of technology follow one steady exponential curve which has never, ever slowed down or hesitated, not even during natural disasters or the world’s most severe economic recessions.

He has also found natural explanations for this, noting that new products of evolution and development are reinserted into the ecosystem, where they contribute to an increased speed of development. Put in mathematical terms, this results in an exponential curve that fits real observations.

One main property of exponential curves is that they are highly non-intuitive when you follow them. They start out slow and seem linear – meaning that everything appears to continue at constant speed – but at a certain point the acceleration becomes obvious, and shortly after, the speed increases to breathtaking levels.
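A minimal numeric sketch of this property (the 5% growth rate and the checkpoints are arbitrary choices of mine, purely for illustration):

```python
# An exponential process looks almost flat at first, then explodes.
# The 5% rate per step and the 100-step horizon are arbitrary choices.
rate = 1.05
value = 1.0
checkpoints = {}
for step in range(1, 101):
    value *= rate
    if step in (10, 50, 100):
        checkpoints[step] = round(value, 1)

print(checkpoints)  # {10: 1.6, 50: 11.5, 100: 131.5}
# In the first stretch growth looks roughly linear;
# by the final checkpoints it is runaway acceleration.
```

The same constant multiplicative rate produces almost nothing for a long time and then overwhelming change – which is exactly why exponential forecasts feel implausible while they are still on the flat part of the curve.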

This is why Kurzweil’s conclusions might seem like fantasies, far from reality, even though they are basically observations of reality. One of his conclusions is that artificial intelligence in 2045 will surpass the total intelligence of all human brains in the world, in an intellectual, emotional and moral sense. This would be the Singularity.

Most probably this conclusion is accurate, and if it is not, it’s off by only a couple of years in either direction.

Another of Kurzweil’s predictions regards nanotechnology and the development of microscopic robots – nanobots – which ultimately could circulate in human bodies, adding functions ranging from nutrition and healthcare to intelligence.

In fact, Kurzweil concludes that approaching the Singularity, humans will gradually add more and more technology into the biological body and ultimately integrate the brain with artificial super intelligence through nanobots connected by wireless networks.

Whether this will be the way to do it is uncertain, but I find it reasonable to believe that humans will integrate with artificial intelligence in the same way that we gradually enhance our bodies in other ways.

Or to put it another way, we would probably prefer to be part of this intelligence rather than have it around as a separate entity, completely detached from human consciousness (which ultimately might be independent of biological bodies…).

Now, Kurzweil brings up another phenomenon that we are all perfectly aware of – that all technologies inherently bring both new possibilities and risks, all the way from inventions such as fire, the wheel and the knife to aircraft and nuclear power.

And while the possibilities with technologies that we will develop in the following decades are enormous, so are the risks.

The only way to deal with these risks is as we have always done – putting more brains to work on constructive applications of a technology than on destructive ones, giving maximum support to constructive development in times of threat in order to always stay at least one step ahead of destructive forces. As technology development will never stop, unless the world ends, we simply have no other choice.

But this is just one part of the equation – I will come back to this.

Let me first bring up some of the most staggering risks that Kurzweil and others have depicted. Discussions on the risks of gene therapy and gene modification are already widespread, with all the advantages and disadvantages such technologies bring.

Even more frightening could be the risks of nanobots, especially if we allow them to be self-replicating. The main disaster scenario is the ‘grey goo’, in which self-replicating nanobots start multiplying without control and, if distributed all over the world, consume all existing biomass in a matter of hours.
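To see why the timescale could be so short, here’s a back-of-the-envelope sketch. All figures are illustrative assumptions of mine (biomass on the order of magnitude of Earth’s total biomass carbon, a nanobot mass and replication cycle I’ve picked for the example), not numbers from the grey-goo literature – the point is only that unchecked doubling needs surprisingly few generations:

```python
import math

# Illustrative assumptions (not established figures):
# - Earth's biomass on the order of 5.5e14 kg
# - one nanobot weighing about 1e-15 kg
# - one replication cycle taking about 100 seconds
BIOMASS_KG = 5.5e14
NANOBOT_KG = 1e-15
CYCLE_SECONDS = 100

# Number of doublings for one nanobot's mass to reach the total biomass
doublings = math.ceil(math.log2(BIOMASS_KG / NANOBOT_KG))
hours = doublings * CYCLE_SECONDS / 3600

print(doublings)        # roughly a hundred doublings suffice
print(round(hours, 1))  # under these assumptions, only a few hours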

And the only reason to develop self-replicating nanobots would actually be to build an immune system capable of detecting and attacking nanobots that have gone awry or are designed for destructive purposes – a defense that itself requires powerful self-replication.

Now here’s the key issue: while there’s an idea of how we can defend ourselves and manage the risks of anything up to nanotechnology, there’s no such possible defense against a super intelligence that would attack us, simply because a super intelligence would always be smarter than us and would find a way to circumvent any defense we could possibly imagine.

Kurzweil is deliberately careful on this point and notes that there’s absolutely no guarantee that a super intelligence will respect us, but his thought is that if we are careful when designing it, basing it on human intelligence, it will.

This might look like a very weak hope, almost desperate, but here’s my point: it’s actually more than a hope. We have reason to believe that a super intelligence will need to respect humans and the human values developed through thousands of years of civilization, simply because that’s the only way to survive, even for a super intelligence.

To make this credible, let’s start by noting Kurzweil’s discussion of our uniqueness in the universe. Despite conclusions drawn from the Drake Equation and projects like the Search for Extraterrestrial Intelligence (SETI), Kurzweil concludes that we might actually be the most developed intelligence in the universe.

The main reason is that, following the universal exponential curve of development, our intelligence would under certain conditions expand into the universe within a few hundred years. If a higher intelligence than ours existed somewhere in the universe, developed through billions of years like life on Earth, it’s highly unlikely that it wouldn’t already have reached us. Otherwise its development would need to be timed precisely with ours, to within a few hundred years.

Now consider this: if life on Earth and our intelligence are the result of a unique process necessary to reach this level, then so are all the sociological structures around them.

This is the other part of the equation I mentioned – that while technology gets gradually more powerful we also develop the unique sociological structures around it that are necessary to handle the powers of the technology.

The delicate balance between the inherent possibilities and risks in technology that I discussed before is possible to maintain today only because of an open and democratic society, which has its origins among the ancient Greeks.

As Kurzweil points out – if we decide to ban the development of certain technologies out of fear of the risks they present to humanity, the result will be that development of such technology moves underground and to totalitarian states, where we won’t have sufficient insight to develop defenses against destructive uses of such technology. The result could eventually be a disaster, ending life on Earth.

In this way, an open and democratic society is a necessary condition for an intelligent civilization to survive.

Or as Kurzweil puts it in “The Singularity is Near”:

“Although the argument is subtle I believe that maintaining an open free-market system for incremental scientific and technological progress, in which each step is subject to market acceptance, will provide the most constructive environment for technology to embody widespread human values”

Another way to put it is that democracy has not been developed by chance, but is actually part of the process that has brought the advanced technology we have today and also makes it possible to handle it.

And while technology continues to develop, so do democracy and other sociological structures and human values.

Increased and more efficient communications bring people on Earth closer every day. Cultures and traditions meet, influence each other and often come in conflict.

This is a gradual process which is possible to handle just because it is gradual. The result is a continuing refinement of sociological structures which are necessary in order for us to deal with steadily increased risks and possibilities with new technologies.

Today a threat from an aggressive epidemic virus can be met thanks to highly developed methods for scanning its genetic code and creating vaccines very rapidly, but also thanks to sociological structures that let us coordinate an action plan for epidemic defense.

On the other hand, the same structures give us the possibility to question such an action plan and the use of vaccines if there’s reason to do so, without putting humanity at immediate risk – it’s of fundamental importance that the system can also accommodate criticism.

In the years to come, discussions on personal privacy will be extremely important, as the need to monitor technology in order to defend against malicious use will increase. This must be done without compromising individual liberty and legal certainty.

The more intelligent and refined technology gets, the more it will reflect the values of the people creating and using it. To avoid disastrous conflicts in the realm of technology, we will need all the experience we have gained in mixing cultures and learning to respect each other – a process where we still have a lot to learn.

As Kurzweil notes, two basic human values will remain the most important: respect for any other (human) consciousness and respect for knowledge in the form of art, music, literature, science and technology.

These are the two fundamental values that we have developed, and they will probably remain unchanged for any intelligence that wants to survive.

The first has a spiritual origin, whereas the second has grown through the building of our modern society, and it will become more important as everything moves towards knowledge.

Starting from these two principles, we will need to let the values and ethics of different religions and philosophies, formed over thousands of years, continue to influence each other, and to let the structures and values of different societies meet and mix – all in order to gain the experience that helps a complex system of individual minds, gradually becoming more powerful and intelligent, to survive.

Because in the end, the strength of nature and technology is built on differentiation, and making an immense number of super intelligent, conscious individual entities – each potentially extremely harmful – coexist and survive together will require extremely developed and refined sociological structures.

This is why human values are fundamental for the destiny of the universe, and it is also why we can expect that future super intelligence will respect humans and human values, and even develop them to a much higher level.

The ancient Greeks probably didn’t have a clue about this.

The E-cat, Cold Fusion and LENR

In the last year, lots of people have found my reports on the ‘E-cat’ and Cold Fusion or LENR.

For those who haven’t heard about this, it seems to be a new and very flexible energy source, potentially based on a new kind of nuclear reaction different from fission (nuclear plants) and fusion (the sun), with important new results regarding commercial applications presented during 2011.

Ny Teknik, where I’m a senior editor, was for a long time the only major newspaper in the world to report on the E-cat.

Irrefutable tests have not yet been done, but enough has been demonstrated to make us decide to continue reporting on the phenomenon (we’re serious about this – it’s definitely not in the domain of perpetuum mobile and similar stuff).

I expect conclusive results during 2012 and will keep following this area.

However, as the topic is extremely stigmatized and controversial, partly due to its origins in the famous claims about ‘cold fusion’ presented by Fleischmann and Pons in 1989, I will make very few personal comments in this blog, at least until independent proof of the technology’s validity has been presented.

As soon as I have significant news, I will do reports in Ny Teknik and update the blog with links to these reports.

To understand possible consequences of cold fusion, I recommend reading the pdf-book “Cold Fusion and the Future” by Jed Rothwell (download for free here).

Among Rothwell’s findings is that a clean, cheap, small and flexible energy source such as cold fusion would fairly quickly put an end to the whole oil industry, the nuclear power industry, existing distributed electric power grids and all research into hot fusion as an energy source, besides providing clean water to everyone on Earth. And that’s just the start.

Ny Teknik’s complete coverage of the ‘E-cat’ and this topic can be found here (in reverse chronological order):

2012 is a good year to start a blog

2012 is a good year to start a blog on a major transformation for humanity and of the world. Several theories based on the Mesoamerican Long Count calendar suggest that Dec 21, 2012, will either mark the beginning of a new era, or the end of the world.

This blog has basically nothing to do with these theories, even though it deals with a transformation that will be of historical magnitude.

The Biggest Shift Ever is a blog that will try to guide you towards a better understanding of the huge and accelerating transformation that technology is bringing to our lives and to the world.

Many people have a vague feeling of this change coming, without being able to describe it. Some think of the internet, saying ‘look, now we’ve got the internet – it’s great with online shopping, banking, music, media, social networks and all that stuff’.

The truth is that the internet just got started. And that internet itself and other cutting-edge technologies are just the beginning of an even more dramatic development towards a new era in the history of humanity and technology.

Inevitably technology, originally designed by humans, will arrive at a performance equal to anything created by nature, and it will then surpass nature, ultimately developing itself. Hopefully this development will be based on fundamental values of humanity, and with humans as part of it.

This might be staggering and breathtaking, giving room for hope, doubt and fear.

My aim is to reduce fear and doubt, and to inspire ideas on how we can shape and steer technology in a way that it will always be a part of its origins – humanity and nature.

Fantasy will be a tool among others to find these ideas. But the fundamental understanding of how technology develops is by no means based on fantasy.

Thinkers and authors such as Kevin Kelly and Ray Kurzweil have contributed significantly to theories on how technology development will proceed and they have also been a great source of inspiration to me.

Especially Kurzweil has a profound insight on technology’s overall future development, predicting among other things that in 2045, machines will have an intellectual, emotional and moral capacity exceeding all human brains put together.

This is usually referred to as the Technological Singularity – a point in time beyond which, some say, we cannot see today, since we don’t know what such an intelligence will choose to do.

The prediction of super intelligent machines in 2045, and others by Kurzweil, are not fantasies but rational and straightforward – though not very intuitive – conclusions, based on historical facts and on knowledge of present technology.

The consequences are not obvious but one thing is clear – the earlier we create a common insight on the importance of shaping these technologies, the better.

The earlier we can build a widespread consciousness on this development, the better our chances to deal with its huge impact, to limit its inherent risks and dangers, and to let technology be a creative force which carries humanity in its most general sense, into the future.

To do that, we need to start where we are. And I will not focus so much on a distant future, but instead on what’s going on right now, trying to put it into the perspective of what it means in the long run.

I hope you’ll enjoy joining me, and having a conversation.