Blog

AI & Culture

A while ago I was invited to a conference on Artificial Intelligence and education. Being completely honest, I rather questioned the relevance of my presence at such an event; nevertheless, I took my AI for Dummies dictionary and went. To my surprise, instead of discussing the design and implementation of the AI solutions of the future, the focus was on ethical issues and the human-centric use of AI.

Some of the best experts in Europe had gathered to figure out how to make AI work in people’s favour. They joined efforts to take a step further in finding a human-centric approach and setting ethical guidelines.

It turned out that AI is not only technology; AI is also culture: a fast, creative, and demanding culture. Like any culture, it creates its artefacts, probably faster than any culture before it. The problem is that the other components any culture consists of, shared values, attitudes, and normative patterns, are still missing. Ethical values have been neglected so far, which means that this highly invasive, fast-developing culture still does not have its healthy boundaries: a regulatory ethical and legal framework.

The good news is that Europe is ‘all hands on deck’ on this gap now. The need to put values and norms in place has led to the European Strategy for AI. For the last year the European Commission has been working on AI Ethics Guidelines and legal frameworks. For example, one key principle for AI made in Europe will be “ethics by design”.

To support this complicated process, the EU has funded projects that are trying to find answers to the questions we should all be asking now. Take the SHERPA project, for example: ‘The SHERPA project is developing a number of future scenarios on the use of AI and Big Data analytics in various domains to check our assumptions about the future social conditions that might drive the use of AI and the ethical implications we might have to face as a result.’ Reading the scenarios and thinking about possible solutions feels like the year 2128, whereas they actually discuss potential events in 2025, only six years from now.

Which brings us to the simple truth that AI will ‘happen’ to us, whether we like it or not and whether we understand it or not. Just like electricity ‘happened’ to people when it was brought into their lives. Back then, there were people who tried to fight the change, out of fear of what it could do, or because they lost their jobs, or because they had to learn new skills. The same thing is, and will be, happening now; the same psychological mechanisms are kicking in. Trying to resist change is normal. It is natural human behaviour to be skeptical of, even afraid of, change, especially when you don’t know what it will bring.

So, gaining people’s and societies’ trust and acceptance of AI will be one of the biggest challenges. This means that AI should look and feel familiar and trustworthy: the technology should be predictable, responsible, and verifiable, respect fundamental rights, and follow ethical rules.

My husband says I’m artificially unintelligent, meaning (I hope) that I don’t always know how technology works. The point is that I don’t want to have to know exactly how everything works. I want to be able to use technology knowing that I am protected by the ‘ethics by design’ principle and that it applies to products and solutions by default. I would wager I am not the only one. Otherwise, the use of AI may lead to undesirable outcomes, to say the least, such as creating an echo chamber where people only receive information that corresponds to their opinions, or reinforcing discrimination, as in the case where an algorithm turned racist within 24 hours due to exposure to racist material. The big question, of course, is how all this will happen. At the moment, a high-level expert group of 52 members is working on these issues.

AI will bring rapid technological changes, which means that our world will be significantly transformed, sooner rather than later. These changes will modify the way we work, the way we learn, and the way we travel; they will change medicine and war. Inevitably, they will also change the way we think and the way we behave.

The changes are already happening; the European Commission reports that almost all Member States are facing shortages of information and communications technology professionals, with currently between 800,000 and 1,000,000 vacancies for digital experts in Europe. And yet, there are significant unemployment rates in some countries, which shows that a number of changes need to be put in place, such as in the skills required of workers: potentially very large numbers of workers will need to upskill. Poor general technical knowledge in the broader population hampers the accessibility and uptake of AI-based solutions, which creates a new type of minority in our societies: digital and tech minorities.

Among the solutions suggested by the European Commission is fostering access to the necessary skills in primary and secondary schools, although teacher training remains an important challenge. Fast-track retraining programmes need to be designed to enable the population to gain experience with AI. Technology like Massive Open Online Courses (MOOCs) could be used to scale up learning. The topic of AI needs to become part of non-technical study programmes through formal and informal education, so as to provide the future workforce with the knowledge needed to operate and navigate in a working environment where AI will be part of day-to-day tasks.

An article in the Guardian suggests that ‘in the emerging technologies of the fourth Industrial Revolution, such as artificial intelligence, Europe is nowhere’. That ‘nowhere’ in figures, as I found out from the factsheet on Artificial Intelligence in Europe: “Europe is behind in private investments in AI: €2.4-3.2 billion in 2016, compared to €6.5-9.7 billion in Asia and €12.1-18.6 billion in North America.”

Europe realizes that without major efforts, the EU risks losing out on the opportunities offered by AI, facing a brain drain and becoming a consumer of solutions developed elsewhere. It is a global talent war. In a coordinated plan for the next programming period, 2021-2027, the Commission proposed that the Union invest in AI at least EUR 1 billion per year from the Horizon Europe and Digital Europe programmes.

The major efforts, however, are in developing ethics guidelines with a global perspective and ensuring an innovation-friendly legal framework. The European Commission is in the process of assessing whether the national and EU safety and liability frameworks can face these new challenges or whether any gaps should be addressed. Each country is writing its own AI guidelines, guidelines in which each society will react to and reflect its culture and vision on the matter.

The ambition, then, is to bring Europe’s ethical approach to the global stage. The Commission is opening up cooperation to all non-EU countries that are willing to share the same values.