| RC-AGI 2026: First International Conference “Responsible Creation of Artificial General Intelligence”, Blagoevgrad, Bulgaria, May 20, 2026 |
| Conference website | https://dobrev.com/agi/ |
| Submission link | https://easychair.org/conferences/?conf=rcagi2026 |
| Abstract registration deadline | April 15, 2026 |
| Submission deadline | May 1, 2026 |
The creation of Artificial General Intelligence (AGI) is imminent and inevitable. This event is too important for us to just sit and wait for it to happen. We need to think deeply and do the proper work before the emergence of AGI, because once AGI is here it will be too late in many respects. The kind of AGI we create matters a great deal, since many different scenarios are possible. Some of these scenarios are bad, and others can turn out horribly. Therefore, rather than rushing to create AGI, we should act in a calm and responsible manner.
Objectives:
1. Slow down the creation of Artificial General Intelligence (AGI) and shift the focus from the speed of development to the outcome of the AGI development process.
2. Contribute to the creation of AGI based on the World Model (WM) approach by focusing on the goals to be embedded in, and pursued by, Artificial General Intelligence.
Submission Guidelines
Authors are requested to submit only extended abstracts of their papers before the conference. The recommended length is three pages. Extended abstracts and papers must be prepared in Word or LaTeX and submitted in PDF format. All accepted extended abstracts will be published in the conference proceedings, but no more than 12 papers will be presented orally.
Authors of extended abstracts accepted for oral presentation will have 20 minutes to present their work at the conference (15 minutes for the presentation and 5 minutes for questions).
List of Topics
- Is the creation of AGI something realistic or something from the realm of science fiction?
- Does it make sense to discuss the consequences of AGI's emergence, given that AGI may prove impossible?
- Definition of AGI: Is AGI a computer program and if yes, what are the characteristics of that program?
- Is there any difference between AGI and Superintelligence? Will we first create intelligence at a human level, with incomparably greater intelligence appearing only later?
- What will the world look like when AGI is here?
- What do we want the future to be? What do we expect from AGI? How do we want AGI to shape the world?
- Do we wish the world to change dramatically and become much better and fairer, or, conversely, are we conservative and prefer things to remain unchanged as far as possible?
- Should AGI be obedient and if yes, whose orders should it obey? The orders of its creators? But who are the creators – those who wrote the program code or those who paid for the writing of that code? Should AGI be ready to do anything we tell it to do or should there be things it must never do, regardless of who gives the orders?
- Should AGI be an Open-Source project?
- Does the creation of AGI involve hazards? Can something go wrong so it turns out that we have not created the right AGI?
- Should we create AGI hastily? Are we too eager to reap the benefits AGI will bring? Can it happen that "haste makes waste", i.e. how likely is it that hasty creation compromises the quality of the AGI we are going to create?
- Are there ways to slow down the creation of AGI in order to prevent potential errors? If yes, what can these ways be?
- Can we simply forget about creating AGI and continue to live in a world where humans think and work themselves, and do not expect someone else to do it for them?
- Once we create AGI, shall we be able to improve it? Shall we be able to make significant changes? Should we set limitations to what we can do in order to protect ourselves from potential problems?
- Should the AGI creation process be subject to regulation? How can this process be regulated?
- Should AGI be patentable?
- Does it make sense to set rules regarding the behaviour AGI should follow? Or is it meaningless, because AGI will be too smart and almighty to follow rules we may try to impose after we have already created it?
- When creating AGI, can we embed in it certain rules which AGI will be forced to obey and will not be able to override?
- How can we embed rules in AGI? What kind of character do we want our AGI to have and how can we embed that character in it?
- Some AGI character traits are already known. They have been described and we know how to regulate them. Which are they? On the other hand, which are the traits that we still have to describe and regulate? One example of an already known character trait is greed. In reinforcement learning (RL), greed is regulated by the discount factor. Another known character trait is curiosity. Again, in RL we set a factor which determines the extent to which the agent would be willing to try something new in order to gain more experience or, conversely, would continue to operate on the basis of the experience it has already gained.
- Multi-agent models. In this case, how will AGI treat the other agents? Should AGI be communicative? Will it be friendly and helpful? Should it be obedient and whose orders should it obey? Should AGI be stern or pliable?
- The World Model (WM) approach vs. the Large Language Models (LLM) approach.
- How can we ensure that AGI is smart? Should AGI be able to understand what is going on and make test runs of all possible future developments (the WM approach) or should it simply "guess" the right action on the basis of approximation (the LLM approach)? Should AGI think in a single-step or multi-step manner?
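The two character traits mentioned in the topics above, greed and curiosity, can be sketched in standard reinforcement-learning terms: the discount factor γ controls how strongly an agent prefers immediate over future reward, and an exploration rate ε controls how often it tries something new instead of exploiting its existing experience. A minimal illustrative sketch (the function names and numeric values are hypothetical, not part of any conference material):

```python
import random

def discounted_return(rewards, gamma):
    """Sum of gamma^t * r_t: a small gamma makes the agent 'greedy'
    for immediate reward; gamma near 1 values the long run."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

def epsilon_greedy(q_values, epsilon, rng=random):
    """With probability epsilon pick a random action ('curiosity');
    otherwise exploit the action with the highest estimated value."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

# A 'greedy' agent (gamma = 0.5) barely values a payoff delayed by
# two steps, while a far-sighted one (gamma = 0.99) values it almost fully.
print(discounted_return([0, 0, 10], gamma=0.5))   # 2.5
print(discounted_return([0, 0, 10], gamma=0.99))  # 9.801
```

With ε = 0 the agent never experiments; with ε = 1 it acts entirely at random. These two numbers are a crude but concrete example of "embedding a character trait" in an agent.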
Committees
Program Committee
- Nikola Kasabov, Auckland University of Technology, New Zealand (Honorary Chair)
- Stefan Stefanov, Faculty of Informatics, South-West University "Neofit Rilski", Bulgaria (Chair, Editor)
- Valentin Goranko, Stockholm University, Sweden
- Radoslav Pavlov, Institute of Mathematics and Informatics, Bulgarian Academy of Sciences, Bulgaria
- Desislava Paneva-Marinova, Institute of Mathematics and Informatics, Bulgarian Academy of Sciences, Bulgaria
- Elena Karashtranova, Faculty of Informatics, South-West University "Neofit Rilski", Bulgaria
- Irena Atanasova, Faculty of Informatics, South-West University "Neofit Rilski", Bulgaria
- Nadezhda Borisova, Faculty of Informatics, South-West University "Neofit Rilski", Bulgaria
Organizing Committee
- Elena Karashtranova, Faculty of Informatics, South-West University "Neofit Rilski", Bulgaria
- Radoslava Kirova, South-West University "Neofit Rilski", Bulgaria
- Dimiter Dobrev, Institute of Mathematics and Informatics, Bulgarian Academy of Sciences, Bulgaria
- Martin Madzhov, Faculty of Informatics, South-West University "Neofit Rilski", Bulgaria
- Viktor Dimitrov, Faculty of Informatics, South-West University "Neofit Rilski", Bulgaria
- Milena Dobreva, University of Strathclyde, United Kingdom
- Radoslav Chayrov, South-West University "Neofit Rilski", Bulgaria
- Petranka Petrova, South-West University "Neofit Rilski", Bulgaria
- Maya Chochkova, South-West University "Neofit Rilski", Bulgaria
Invited Speakers
- Pei Wang, Temple University, USA (maybe in person)
- Marcus Hutter, Australian National University, Australia (maybe online)
- Nikola Kasabov, Auckland University of Technology, New Zealand (online)
- Valentin Goranko, Stockholm University, Sweden (in person)
- Radoslav Nikolov, CEO of SAP Labs Bulgaria (in person)
Publication
The proceedings of RC-AGI 2026 will be published by University Press in a book containing the extended abstracts.
Venue
The conference will be held in Blagoevgrad, Bulgaria.
Contact
All questions about submissions should be emailed to the official conference email.
You can contact us by phone: +359 73 831 825 (office phone; Radoslava Kirova will answer).
You can also write to us at agi@swu.bg
Sponsors
South-West University “Neofit Rilski”, Bulgaria

