A conference was held at BME’s Faculty of Electrical Engineering and Informatics (BME VIK) on the much-awaited and recently adopted proposal for the AI Act of the European Union.
On June 14, 2023, the European Parliament adopted its position on the rules regulating systems that use artificial intelligence (AI). These human-centred rules, which support ethical development, will form the basis of the world's first comprehensive AI regulation, the AI Act. The conference, entitled “Human-centred Regulation of AI”, scrutinized both the AI Act and a world-leading MSc programme focusing on ethical AI at BME’s Faculty of Electrical Engineering and Informatics (BME VIK). More than 200 researchers, market players, and employees of public authorities and state administration with an interest in AI took part.
“The proposal describing the EU's position was preceded by a long professional and social debate dating back to June 2018. The importance of the topic is enormous: this will be the first comprehensive regulation of its kind in the world and, like the earlier General Data Protection Regulation (GDPR), it is expected to guide the regulation of AI in other parts of the world as well,” said Péter Antal, associate professor at the Department of Measurement and Information Systems, summarizing the aim of the event on behalf of the organizers, László Jereb, professor emeritus at the Department of Networked Systems and Services, and Péter Hanák, senior advisor at the Tender and Project Group, Faculty Administration Service Center at BME VIK. Conference participants learned about the goals, structure and interpretation of the future regulation, the information sources currently available on the subject, the educational elements and the prescribed practical tasks.
“The extensive professional and social consultation on the subject will soon end and the act will enter into force. A regulation affecting both the corporate and private sectors is being prepared, the main points of which should be understood as soon as possible; everyone should be aware that it will bring obligations when it enters into force,” BME’s lecturers and researchers stressed. “To guarantee human rights, the future act will prohibit certain AI solutions, such as real-time identification in public spaces or the social scoring of citizens. In other, high-risk cases, AI devices must comply with prescribed requirements, and compliance will also have to be verified by notified bodies, as under the Medical Devices Regulation (MDR).”
The professional event, in the presence of BME’s Chancellor Miklós Verseghi-Nagy, was opened by Charaf Hassan, Dean of BME VIK, and Balázs Hankó, State Secretary of the Ministry of Culture and Innovation responsible for higher education, innovation and vocational training and adult education. Next, János Levendovszky, BME’s Vice-Rector for Science and Innovation, full professor at the Department of Networked Systems and Services of BME VIK, an internationally recognised expert in the field of artificial intelligence introduced the conference topic.
Dimitris Tzanidakis gave the first lecture, on the European development of the AI Act, in which he was actively involved as an advisor to the European Parliament. Gabriele Franco, representing an Italian partner law firm, then detailed the subject's relationship with the GDPR. Dóra Petrányi highlighted the AI Act's connections to human rights, while Tarry Singh presented the expected impact of AI on productivity. László Bódis, Deputy State Secretary responsible for innovation at the Ministry of Culture and Innovation, presented Hungary’s recently adopted innovation policy from the viewpoint of AI. Antal Kuthy, Szilárd Németh and Zsolt Török, representing various partner companies, spoke about their corporate AI solutions in healthcare, autonomous vehicles and software development, shaped by and in sync with the AI regulation. Their accounts supported the observation that good regulation is creative in nature: it provides trust for users and predictable security for developers. Their presentations also noted that the legal regulation of AI is expected to clarify liability dilemmas and reliability issues. All of this can have a value- and market-creating effect on companies, so the new act may also bring portfolio expansion, increased efficiency and the creation of new jobs.
Speakers at the conference emphasized that examining compliance with the act is an integral part of fully developing a regulation. Owing to the technological complexity, automated auditing solutions are essential alongside manual auditing, but these have yet to be developed.
Opinion among the conference audience was divided as to how useful and effective the new law will be. According to the more pessimistic participants, regulation and practice that prioritize business interests may prove more effective, as there are fears that less strictly regulated regions could otherwise gain a competitive advantage. However, everyone agreed that people's willingness to share data requires safe conditions and guarantees, which can only be achieved through comprehensive regulation based on human ideals and freedoms. According to the optimistic half of the audience, Europe can only benefit from being the first in the world to regulate the development and application of AI, as other parts of the world may well be forced to follow the practice of a Europe with a population of 500 million.
The conference programme also included a round-table discussion moderated by Mihály Héder (Head of Department, habilitated Associate Professor, Department of Philosophy and History of Science, BME Faculty of Economic and Social Sciences [BME GTK]). The experts discussed the fundamental dilemma of AI regulation, that of “creative” regulation: how to increase users' confidence in the intensive use of AI, and how to give developers security by clarifying the expected requirements, without regulation restricting opportunities for innovation.
The Human-Centred Artificial Intelligence Masters (HCAIM) initiative supported by the European Union was also presented at the event. The HCAIM project was launched in 2021 with the participation of four European universities (BME, Utrecht University of Applied Sciences, Dublin University of Technology and Federico II University of Naples), three centres of excellence and three small and medium-sized businesses. The project also draws on the teaching and research expertise of more than 25 other leading ICT organizations supporting the programme.
As part of the June conference programme, B. Feeney presented the “Human-Centred AI” MSc training programme developed by the consortium, while its legal, ethical and technical aspects were explained by Kitti Mezei (assistant professor, Department of Business Law, BME GTK), Mihály Héder and Péter Antal. The programme was launched as a new degree course in the Netherlands, Ireland and Italy in the fall of 2022; in Hungary, in accordance with the domestic accreditation, it runs as a supplementary programme within the currently running 120-credit Computer Engineering master's programme of BME VIK.
Students completing the programme, which is human-centred and takes legal, ethical, sociological and other social science aspects into account, will gain knowledge enabling them to create AI innovations that respect and support the protection of human rights, while exploiting the potential inherent in AI for the benefit of today's digital society.
The training plan highlights the privileged role of human values and rights in the development of AI applications and, at the same time, helps students consider and evaluate various aspects and risks across the entire life cycle of AI developments. “The use of AI must be legal and ethical, respecting human rights. Graduates building AI systems must acquire the right mix of technological skills and ethical and legal knowledge to meet industrial needs. Our goal is to develop students' analytical, design, development and creative skills so that they can incorporate human-centric AI solutions into various systems and applications, creating a balance in the use of different technologies and scientific results,” BME’s specialists emphasized about the 60-ECTS-credit master's programme. They revealed that they are currently working on extending the MSc programme so that interested people with a non-engineering background can also take relevant modules, even within the framework of adult training.