Dynamic Decision Making

Our research focuses on an advanced theory of dynamic decision-making (DM) under uncertainty and incomplete knowledge. This theory aims to strengthen DM processes for both human and artificial agents.

The significance of our theory extends far beyond academic circles, playing a vital role in various contemporary applications, particularly in artificial intelligence. As AI agents navigate uncertain environments, a robust theoretical framework is essential for ensuring effective, reliable outcomes. Our theory strengthens agents' reasoning and their ability to address complex situations while staying aligned with their specific goals.

Partial solutions derived from this theory have been tested in fields such as medical diagnostics, manufacturing process control, complex robotic systems, economic decision-making, e-democracy, and transportation. These applications highlight the importance of a well-defined decision-making theory in today’s rapidly evolving technological landscape.

Our results are built around three key components, described below.

What distinguishes our approach is the thoughtful integration of prior knowledge, theoretical concepts, and available data with individual DM goals. Instead of blindly replicating past patterns, our method emphasizes careful optimization that accounts for uncertainties, environmental variations, and limited deliberation resources. This framework enhances the overall quality and effectiveness of decision-making in dynamic contexts.

 

Axiomatic prescriptive decision-making theory

The theory generalises existing theories built on the optimisation of expected utility. It models both the agent’s environment and its decision goals as probability distributions [1] and operates solely on them [2].
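The core of this component is fully probabilistic design: the closed-loop behaviour induced by a decision strategy is described by a probability distribution, and the optimal strategy is the one whose closed-loop distribution is closest, in the Kullback–Leibler sense, to an ideal distribution expressing the decision goals [1]. The following is a sketch of the criterion in our own notation, not a formula taken verbatim from the cited papers:

```latex
% Sketch of the fully probabilistic design criterion (notation ours):
% the optimal strategy makes the closed-loop distribution f of the behaviour b
% as close as possible, in the Kullback-Leibler sense, to an ideal
% distribution f^I that encodes the decision goals.
\[
  \text{strategy}^{\mathrm{opt}}
    \in \arg\min_{\text{strategy}}
        \mathcal{D}\!\left(f \,\big\|\, f^{I}\right),
  \qquad
  \mathcal{D}\!\left(f \,\big\|\, f^{I}\right)
    = \int f(b)\,\ln\frac{f(b)}{f^{I}(b)}\,\mathrm{d}b .
\]
```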


 

Automatic mapping of decision goals

We offer robust and well-grounded methods for translating available informal or incomplete knowledge and decision goals into probabilities, while accounting for the limited resources of real agents, such as expertise, time for deliberation, data availability, computing power, and energy requirements [3].
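As a purely illustrative sketch, and not the elicitation method of [3], the snippet below shows one simple way an informal preference ordering over a few discrete outcomes could be turned into an "ideal" probability distribution usable by a probabilistic agent. The outcomes, scores, and the softmax mapping are all our assumptions.

```python
# Hypothetical example: map an informal preference ordering over discrete
# outcomes to an "ideal" probability distribution (higher score = preferred).
import numpy as np

outcomes = ["recovered", "stable", "deteriorated"]   # hypothetical outcomes
preference_scores = np.array([2.0, 1.0, -3.0])       # hypothetical scores

def goals_to_ideal_distribution(scores, temperature=1.0):
    """Softmax mapping: preferred outcomes receive higher ideal probability."""
    z = scores / temperature
    z -= z.max()                       # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

ideal = goals_to_ideal_distribution(preference_scores)
for outcome, prob in zip(outcomes, ideal):
    print(f"{outcome}: {prob:.3f}")
```

The temperature parameter, also an assumption here, controls how sharply the ideal distribution concentrates on the most preferred outcomes.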

 

Efficient knowledge transfer

Our research introduces a universal correspondence function that identifies and learns similarities between two related decision-making tasks, enabling efficient knowledge transfer. The effectiveness of this transfer has been demonstrated in a video game environment [4].
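The sketch below, written in PyTorch, is a minimal hypothetical illustration of the general idea of a GAN-based correspondence function: a generator learns to map states of a source task to plausible states of a related target task, while a discriminator tries to tell mapped states from genuine target states. It is not the architecture of [4]; all dimensions, layer sizes, and names are our assumptions.

```python
# Illustrative sketch (not the authors' implementation) of a GAN-style
# correspondence function between two related tasks' state spaces.
import torch
import torch.nn as nn

SRC_DIM, TGT_DIM = 8, 6  # hypothetical state dimensions of the two tasks

# Generator G: the correspondence function mapping source states to target-like states.
G = nn.Sequential(nn.Linear(SRC_DIM, 64), nn.ReLU(), nn.Linear(64, TGT_DIM))
# Discriminator D: distinguishes genuine target states from mapped source states.
D = nn.Sequential(nn.Linear(TGT_DIM, 64), nn.ReLU(), nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def train_step(src_states, tgt_states):
    """One adversarial update on a batch of source-task and target-task states."""
    # Discriminator update: real target states -> 1, mapped source states -> 0.
    fake = G(src_states).detach()
    d_loss = bce(D(tgt_states), torch.ones(len(tgt_states), 1)) + \
             bce(D(fake), torch.zeros(len(fake), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: the correspondence function tries to fool the discriminator.
    g_loss = bce(D(G(src_states)), torch.ones(len(src_states), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Synthetic batches standing in for states collected from two related tasks.
src = torch.randn(32, SRC_DIM)
tgt = torch.randn(32, TGT_DIM)
print(train_step(src, tgt))
```

Once such a mapping is learned, experience or a policy from the source task can be reused in the target task through the correspondence function rather than learned from scratch.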

Related publications:

[1] Šindelář, J., Vajda, I., Kárný, M., Stochastic control optimal in the Kullback sense, Kybernetika, 2008, 44(1):53-60.
[2] Kárný, M., Prescriptive Inductive Operations on Probabilities Serving to Decision-Making Agents, IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2022, 52(4):2110-2120.
[3] Kárný, M., Guy, T. V., Preference Elicitation within Framework of Fully Probabilistic Design of Decision Strategies, IFAC-PapersOnLine: Proceedings of the 13th IFAC Workshop on Adaptive and Learning Control Systems, 2019, 52(29):239-244.
[4] Ruman, M., Guy, T. V., Knowledge Transfer in Deep Reinforcement Learning via an RL-Specific GAN-Based Correspondence Function, IEEE Access, 2024, 12.